00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1052
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3719
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.069 The recommended git tool is: git
00:00:00.069 using credential 00000000-0000-0000-0000-000000000002
00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.099 Fetching changes from the remote Git repository
00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.141 Using shallow fetch with depth 1
00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.141 > git --version # timeout=10
00:00:00.171 > git --version # 'git version 2.39.2'
00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.194 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.194 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.140 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.150 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.160 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.160 > git config core.sparsecheckout # timeout=10
00:00:05.227 > git read-tree -mu HEAD # timeout=10
00:00:05.245 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.264 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.264 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
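Outside Jenkins, the pinned checkout above can be reproduced with a plain shallow fetch. A minimal sketch, assuming the Gerrit mirror is reachable; the URL and revision are taken from the log, the target directory is arbitrary:

```bash
#!/usr/bin/env bash
# Sketch only: reproduce the shallow, pinned checkout Jenkins performs above.
set -euo pipefail

repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
rev=db4637e8b949f278f369ec13f70585206ccd9507   # revision recorded in the log

git init jbp && cd jbp
git fetch --depth=1 "$repo" refs/heads/master  # shallow fetch, depth 1, as in the log
git checkout -f "$rev"                         # detach at the pinned revision
```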
00:00:05.347 [Pipeline] Start of Pipeline
00:00:05.357 [Pipeline] library
00:00:05.358 Loading library shm_lib@master
00:00:05.358 Library shm_lib@master is cached. Copying from home.
00:00:05.368 [Pipeline] node
00:00:05.378 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:05.379 [Pipeline] {
00:00:05.386 [Pipeline] catchError
00:00:05.387 [Pipeline] {
00:00:05.395 [Pipeline] wrap
00:00:05.400 [Pipeline] {
00:00:05.405 [Pipeline] stage
00:00:05.406 [Pipeline] { (Prologue)
00:00:05.418 [Pipeline] echo
00:00:05.419 Node: VM-host-SM9
00:00:05.423 [Pipeline] cleanWs
00:00:05.431 [WS-CLEANUP] Deleting project workspace...
00:00:05.431 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.437 [WS-CLEANUP] done
00:00:05.633 [Pipeline] setCustomBuildProperty
00:00:05.717 [Pipeline] httpRequest
00:00:06.060 [Pipeline] echo
00:00:06.061 Sorcerer 10.211.164.20 is alive
00:00:06.069 [Pipeline] retry
00:00:06.070 [Pipeline] {
00:00:06.082 [Pipeline] httpRequest
00:00:06.086 HttpMethod: GET
00:00:06.086 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.086 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.104 Response Code: HTTP/1.1 200 OK
00:00:06.104 Success: Status code 200 is in the accepted range: 200,404
00:00:06.104 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.966 [Pipeline] }
00:00:08.983 [Pipeline] // retry
00:00:08.990 [Pipeline] sh
00:00:09.272 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.287 [Pipeline] httpRequest
00:00:09.656 [Pipeline] echo
00:00:09.658 Sorcerer 10.211.164.20 is alive
00:00:09.668 [Pipeline] retry
00:00:09.670 [Pipeline] {
00:00:09.684 [Pipeline] httpRequest
00:00:09.688 HttpMethod: GET
00:00:09.689 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.689 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.705 Response Code: HTTP/1.1 200 OK
00:00:09.706 Success: Status code 200 is in the accepted range: 200,404
00:00:09.706 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:29.289 [Pipeline] }
00:01:29.307 [Pipeline] // retry
00:01:29.315 [Pipeline] sh
00:01:29.597 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:32.202 [Pipeline] sh
00:01:32.479 + git -C spdk log --oneline -n5
00:01:32.479 c13c99a5e test: Various fixes for Fedora40
00:01:32.479 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:32.480 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:32.480 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:32.480 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
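Each artifact above (jbp, spdk, and dpdk below) follows the same cache pattern: probe the internal "Sorcerer" package cache, download a pinned tarball, and unpack it without restoring archive ownership. A stand-alone sketch; host, package name, and tar flags mirror the log, while the curl retry count is illustrative:

```bash
# Sketch of the package-cache fetch/unpack pattern used repeatedly above.
set -euo pipefail

cache=http://10.211.164.20/packages
pkg=spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz

curl --fail --retry 3 -O "$cache/$pkg"  # pipeline accepts 200 (404 is treated as a cache miss)
tar --no-same-owner -xf "$pkg"          # unpack without restoring the archive's owner/group
```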
00:01:32.498 [Pipeline] withCredentials
00:01:32.508 > git --version # timeout=10
00:01:32.522 > git --version # 'git version 2.39.2'
00:01:32.537 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:32.539 [Pipeline] {
00:01:32.549 [Pipeline] retry
00:01:32.551 [Pipeline] {
00:01:32.566 [Pipeline] sh
00:01:32.902 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:33.172 [Pipeline] }
00:01:33.189 [Pipeline] // retry
00:01:33.194 [Pipeline] }
00:01:33.210 [Pipeline] // withCredentials
00:01:33.219 [Pipeline] httpRequest
00:01:33.568 [Pipeline] echo
00:01:33.569 Sorcerer 10.211.164.20 is alive
00:01:33.594 [Pipeline] retry
00:01:33.596 [Pipeline] {
00:01:33.609 [Pipeline] httpRequest
00:01:33.614 HttpMethod: GET
00:01:33.615 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:33.615 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:33.620 Response Code: HTTP/1.1 200 OK
00:01:33.621 Success: Status code 200 is in the accepted range: 200,404
00:01:33.621 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:40.098 [Pipeline] }
00:01:40.114 [Pipeline] // retry
00:01:40.121 [Pipeline] sh
00:01:40.399 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:41.789 [Pipeline] sh
00:01:42.069 + git -C dpdk log --oneline -n5
00:01:42.069 eeb0605f11 version: 23.11.0
00:01:42.069 238778122a doc: update release notes for 23.11
00:01:42.069 46aa6b3cfc doc: fix description of RSS features
00:01:42.069 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:42.069 7e421ae345 devtools: support skipping forbid rule check
00:01:42.089 [Pipeline] writeFile
00:01:42.106 [Pipeline] sh
00:01:42.389 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:42.401 [Pipeline] sh
00:01:42.682 + cat autorun-spdk.conf
00:01:42.682 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.682 SPDK_TEST_NVMF=1
00:01:42.682 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.682 SPDK_TEST_URING=1
00:01:42.682 SPDK_TEST_USDT=1
00:01:42.682 SPDK_RUN_UBSAN=1
00:01:42.682 NET_TYPE=virt
00:01:42.682 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:42.682 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:42.682 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:42.690 RUN_NIGHTLY=1
00:01:42.692 [Pipeline] }
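autorun-spdk.conf is plain shell, so downstream scripts simply source it and branch on the SPDK_* flags. A minimal sketch of that consumption; the source line mirrors what prepare_nvme.sh does below, the conditional is an illustrative example rather than pipeline code:

```bash
# Sketch: the conf file is consumed by sourcing it, as prepare_nvme.sh does below.
source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf

if [[ $SPDK_TEST_NVMF == 1 ]]; then
  echo "NVMe-oF tests enabled, transport: $SPDK_TEST_NVMF_TRANSPORT"
fi
```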
00:01:42.706 [Pipeline] // stage
00:01:42.724 [Pipeline] stage
00:01:42.726 [Pipeline] { (Run VM)
00:01:42.740 [Pipeline] sh
00:01:43.022 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:43.022 + echo 'Start stage prepare_nvme.sh'
00:01:43.022 Start stage prepare_nvme.sh
00:01:43.022 + [[ -n 1 ]]
00:01:43.022 + disk_prefix=ex1
00:01:43.022 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:01:43.022 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:01:43.022 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:01:43.022 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.022 ++ SPDK_TEST_NVMF=1
00:01:43.022 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.022 ++ SPDK_TEST_URING=1
00:01:43.022 ++ SPDK_TEST_USDT=1
00:01:43.022 ++ SPDK_RUN_UBSAN=1
00:01:43.022 ++ NET_TYPE=virt
00:01:43.022 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:43.022 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:43.022 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:43.022 ++ RUN_NIGHTLY=1
00:01:43.022 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:43.022 + nvme_files=()
00:01:43.022 + declare -A nvme_files
00:01:43.022 + backend_dir=/var/lib/libvirt/images/backends
00:01:43.022 + nvme_files['nvme.img']=5G
00:01:43.022 + nvme_files['nvme-cmb.img']=5G
00:01:43.022 + nvme_files['nvme-multi0.img']=4G
00:01:43.022 + nvme_files['nvme-multi1.img']=4G
00:01:43.023 + nvme_files['nvme-multi2.img']=4G
00:01:43.023 + nvme_files['nvme-openstack.img']=8G
00:01:43.023 + nvme_files['nvme-zns.img']=5G
00:01:43.023 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:43.023 + (( SPDK_TEST_FTL == 1 ))
00:01:43.023 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:43.023 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:43.023 + for nvme in "${!nvme_files[@]}"
00:01:43.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:43.023 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.023 + for nvme in "${!nvme_files[@]}"
00:01:43.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:43.023 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:43.023 + for nvme in "${!nvme_files[@]}"
00:01:43.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:43.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:43.282 + for nvme in "${!nvme_files[@]}"
00:01:43.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:43.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:43.282 + for nvme in "${!nvme_files[@]}"
00:01:43.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:43.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.282 + for nvme in "${!nvme_files[@]}"
00:01:43.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:43.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.540 + for nvme in "${!nvme_files[@]}"
00:01:43.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:43.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:43.540 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:43.799 + echo 'End stage prepare_nvme.sh'
00:01:43.799 End stage prepare_nvme.sh
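The loop above preallocates one raw backing file per entry of the nvme_files map. The same effect can be sketched with qemu-img directly; qemu-img here is a hypothetical stand-in for spdk/scripts/vagrant/create_nvme_img.sh, whose body the log does not show:

```bash
# Sketch of the image-creation loop traced above: one raw, falloc-preallocated
# backing file per map entry, matching the "Formatting ..." lines.
declare -A nvme_files=(
  [nvme.img]=5G  [nvme-cmb.img]=5G  [nvme-zns.img]=5G
  [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  [nvme-openstack.img]=8G
)
backend_dir=/var/lib/libvirt/images/backends

for nvme in "${!nvme_files[@]}"; do
  qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex1-$nvme" "${nvme_files[$nvme]}"
done
```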
00:01:43.811 [Pipeline] sh
00:01:44.093 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:44.093 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:44.093
00:01:44.093 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:44.093 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:44.093 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:44.093 HELP=0
00:01:44.093 DRY_RUN=0
00:01:44.093 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:44.093 NVME_DISKS_TYPE=nvme,nvme,
00:01:44.093 NVME_AUTO_CREATE=0
00:01:44.093 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:44.093 NVME_CMB=,,
00:01:44.093 NVME_PMR=,,
00:01:44.093 NVME_ZNS=,,
00:01:44.093 NVME_MS=,,
00:01:44.093 NVME_FDP=,,
00:01:44.093 SPDK_VAGRANT_DISTRO=fedora39
00:01:44.093 SPDK_VAGRANT_VMCPU=10
00:01:44.093 SPDK_VAGRANT_VMRAM=12288
00:01:44.093 SPDK_VAGRANT_PROVIDER=libvirt
00:01:44.093 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:44.093 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:44.093 SPDK_OPENSTACK_NETWORK=0
00:01:44.093 VAGRANT_PACKAGE_BOX=0
00:01:44.093 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:44.093 FORCE_DISTRO=true
00:01:44.093 VAGRANT_BOX_VERSION=
00:01:44.093 EXTRA_VAGRANTFILES=
00:01:44.093 NIC_MODEL=e1000
00:01:44.093
00:01:44.093 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:44.093 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:46.631 Bringing machine 'default' up with 'libvirt' provider...
00:01:47.199 ==> default: Creating image (snapshot of base box volume).
00:01:47.459 ==> default: Creating domain with the following settings...
00:01:47.459 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734072051_eb28b4abe76d51b650e8
00:01:47.459 ==> default: -- Domain type: kvm
00:01:47.459 ==> default: -- Cpus: 10
00:01:47.459 ==> default: -- Feature: acpi
00:01:47.459 ==> default: -- Feature: apic
00:01:47.459 ==> default: -- Feature: pae
00:01:47.459 ==> default: -- Memory: 12288M
00:01:47.459 ==> default: -- Memory Backing: hugepages:
00:01:47.459 ==> default: -- Management MAC:
00:01:47.459 ==> default: -- Loader:
00:01:47.459 ==> default: -- Nvram:
00:01:47.459 ==> default: -- Base box: spdk/fedora39
00:01:47.459 ==> default: -- Storage pool: default
00:01:47.459 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734072051_eb28b4abe76d51b650e8.img (20G)
00:01:47.459 ==> default: -- Volume Cache: default
00:01:47.459 ==> default: -- Kernel:
00:01:47.459 ==> default: -- Initrd:
00:01:47.459 ==> default: -- Graphics Type: vnc
00:01:47.459 ==> default: -- Graphics Port: -1
00:01:47.459 ==> default: -- Graphics IP: 127.0.0.1
00:01:47.459 ==> default: -- Graphics Password: Not defined
00:01:47.459 ==> default: -- Video Type: cirrus
00:01:47.459 ==> default: -- Video VRAM: 9216
00:01:47.459 ==> default: -- Sound Type:
00:01:47.459 ==> default: -- Keymap: en-us
00:01:47.459 ==> default: -- TPM Path:
00:01:47.459 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:47.459 ==> default: -- Command line args:
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:47.459 ==> default: -> value=-drive,
00:01:47.459 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:47.459 ==> default: -> value=-drive,
00:01:47.459 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:47.459 ==> default: -> value=-drive,
00:01:47.459 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:47.459 ==> default: -> value=-drive,
00:01:47.459 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:47.459 ==> default: -> value=-device,
00:01:47.459 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
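Read together, those args define two NVMe controllers: serial 12340 with a single namespace backed by ex1-nvme.img, and serial 12341 with three namespaces (nsid 1-3) backed by the multi0/1/2 images. Consolidated into a plain QEMU invocation they would look roughly like this; illustrative only, since vagrant-libvirt, not this command line, actually launches the guest:

```bash
# Rough consolidation of the libvirt "Command line args" above.
backends=/var/lib/libvirt/images/backends
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=$backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=$backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
  -drive format=raw,file=$backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
  -drive format=raw,file=$backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3
```

This is what later shows up inside the guest as nvme0n1 and nvme1n1/n2/n3 in the setup.sh status table below.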
00:01:47.459 ==> default: Creating shared folders metadata...
00:01:47.459 ==> default: Starting domain.
00:01:48.841 ==> default: Waiting for domain to get an IP address...
00:02:03.804 ==> default: Waiting for SSH to become available...
00:02:05.182 ==> default: Configuring and enabling network interfaces...
00:02:09.576 default: SSH address: 192.168.121.55:22
00:02:09.576 default: SSH username: vagrant
00:02:09.576 default: SSH auth method: private key
00:02:11.483 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:19.604 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:24.876 ==> default: Mounting SSHFS shared folder...
00:02:25.835 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:25.835 ==> default: Checking Mount..
00:02:26.772 ==> default: Folder Successfully Mounted!
00:02:26.772 ==> default: Running provisioner: file...
00:02:27.707 default: ~/.gitconfig => .gitconfig
00:02:28.274
00:02:28.274 SUCCESS!
00:02:28.274
00:02:28.274 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:28.274 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:28.275 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:28.275
00:02:28.283 [Pipeline] }
00:02:28.299 [Pipeline] // stage
00:02:28.308 [Pipeline] dir
00:02:28.309 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:28.311 [Pipeline] {
00:02:28.323 [Pipeline] catchError
00:02:28.325 [Pipeline] {
00:02:28.338 [Pipeline] sh
00:02:28.618 + vagrant ssh-config --host vagrant
00:02:28.618 + sed -ne /^Host/,$p
00:02:28.618 + tee ssh_conf
00:02:32.807 Host vagrant
00:02:32.807 HostName 192.168.121.55
00:02:32.807 User vagrant
00:02:32.807 Port 22
00:02:32.807 UserKnownHostsFile /dev/null
00:02:32.807 StrictHostKeyChecking no
00:02:32.807 PasswordAuthentication no
00:02:32.807 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:32.807 IdentitiesOnly yes
00:02:32.807 LogLevel FATAL
00:02:32.807 ForwardAgent yes
00:02:32.807 ForwardX11 yes
00:02:32.808
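The captured ssh_conf is what every later stage uses to reach the guest non-interactively. The pattern in isolation; both command forms appear verbatim in the log, the uname example is illustrative:

```bash
# Generate the SSH stanza once, then reuse it with -F for every later stage.
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf

ssh -t -F ssh_conf vagrant@vagrant 'uname -a'                      # run a command in the guest
scp -F ssh_conf -r ./autorun-spdk.conf vagrant@vagrant:spdk_repo   # copy files into the guest
```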
00:02:32.820 [Pipeline] withEnv
00:02:32.822 [Pipeline] {
00:02:32.836 [Pipeline] sh
00:02:33.115 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:33.116 source /etc/os-release
00:02:33.116 [[ -e /image.version ]] && img=$(< /image.version)
00:02:33.116 # Minimal, systemd-like check.
00:02:33.116 if [[ -e /.dockerenv ]]; then
00:02:33.116 # Clear garbage from the node's name:
00:02:33.116 # agt-er_autotest_547-896 -> autotest_547-896
00:02:33.116 # $HOSTNAME is the actual container id
00:02:33.116 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:33.116 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:33.116 # We can assume this is a mount from a host where container is running,
00:02:33.116 # so fetch its hostname to easily identify the target swarm worker.
00:02:33.116 container="$(< /etc/hostname) ($agent)"
00:02:33.116 else
00:02:33.116 # Fallback
00:02:33.116 container=$agent
00:02:33.116 fi
00:02:33.116 fi
00:02:33.116 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:33.116
00:02:33.386 [Pipeline] }
00:02:33.402 [Pipeline] // withEnv
00:02:33.410 [Pipeline] setCustomBuildProperty
00:02:33.425 [Pipeline] stage
00:02:33.427 [Pipeline] { (Tests)
00:02:33.443 [Pipeline] sh
00:02:33.723 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:33.995 [Pipeline] sh
00:02:34.274 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:34.547 [Pipeline] timeout
00:02:34.548 Timeout set to expire in 1 hr 0 min
00:02:34.549 [Pipeline] {
00:02:34.565 [Pipeline] sh
00:02:34.846 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:35.416 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:35.467 [Pipeline] sh
00:02:35.784 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:36.055 [Pipeline] sh
00:02:36.336 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:36.350 [Pipeline] sh
00:02:36.633 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:02:36.892 ++ readlink -f spdk_repo
00:02:36.892 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:36.892 + [[ -n /home/vagrant/spdk_repo ]]
00:02:36.892 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:36.892 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:36.892 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:36.892 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:36.892 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:36.892 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:02:36.892 + cd /home/vagrant/spdk_repo
00:02:36.892 + source /etc/os-release
00:02:36.892 ++ NAME='Fedora Linux'
00:02:36.892 ++ VERSION='39 (Cloud Edition)'
00:02:36.892 ++ ID=fedora
00:02:36.892 ++ VERSION_ID=39
00:02:36.892 ++ VERSION_CODENAME=
00:02:36.892 ++ PLATFORM_ID=platform:f39
00:02:36.892 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:36.892 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:36.892 ++ LOGO=fedora-logo-icon
00:02:36.892 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:36.892 ++ HOME_URL=https://fedoraproject.org/
00:02:36.892 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:36.892 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:36.892 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:36.892 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:36.892 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:36.892 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:36.892 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:36.892 ++ SUPPORT_END=2024-11-12
00:02:36.892 ++ VARIANT='Cloud Edition'
00:02:36.892 ++ VARIANT_ID=cloud
00:02:36.892 + uname -a
00:02:36.892 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:36.892 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:36.892 Hugepages
00:02:36.892 node hugesize free / total
00:02:36.892 node0 1048576kB 0 / 0
00:02:36.892 node0 2048kB 0 / 0
00:02:36.892
00:02:36.892 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:36.892 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:36.892 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:36.892 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:36.892 + rm -f /tmp/spdk-ld-path
00:02:36.892 + source autorun-spdk.conf
00:02:36.892 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:36.892 ++ SPDK_TEST_NVMF=1
00:02:36.892 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:36.892 ++ SPDK_TEST_URING=1
00:02:36.892 ++ SPDK_TEST_USDT=1
00:02:36.892 ++ SPDK_RUN_UBSAN=1
00:02:36.892 ++ NET_TYPE=virt
00:02:36.892 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:36.892 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:36.892 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:36.892 ++ RUN_NIGHTLY=1
00:02:36.892 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:36.892 + [[ -n '' ]]
00:02:36.892 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:36.892 + for M in /var/spdk/build-*-manifest.txt
00:02:36.892 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:37.151 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.151 + for M in /var/spdk/build-*-manifest.txt
00:02:37.151 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:37.151 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.151 + for M in /var/spdk/build-*-manifest.txt
00:02:37.151 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:37.151 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:37.151 ++ uname
00:02:37.151 + [[ Linux == \L\i\n\u\x ]]
00:02:37.151 + sudo dmesg -T
00:02:37.151 + sudo dmesg --clear
00:02:37.151 + dmesg_pid=5978
00:02:37.151 + [[ Fedora Linux == FreeBSD ]]
00:02:37.151 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.151 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.151 + sudo dmesg -Tw
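The dmesg sequence above is a standard kernel-log capture pattern: snapshot what is there, clear the ring buffer, then follow new messages for the rest of the run. A stand-alone sketch; the backgrounding and pid capture are inferred from the traced "dmesg_pid=5978" assignment:

```bash
sudo dmesg -T        # dump existing messages with readable timestamps
sudo dmesg --clear   # empty the ring buffer for a clean run
sudo dmesg -Tw &     # stream new messages in the background for the whole job
dmesg_pid=$!         # remember the follower so it can be killed at teardown
```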
00:02:37.151 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:37.151 + [[ -x /usr/src/fio-static/fio ]]
00:02:37.151 + export FIO_BIN=/usr/src/fio-static/fio
00:02:37.151 + FIO_BIN=/usr/src/fio-static/fio
00:02:37.151 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:37.151 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:37.151 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:37.151 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.151 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.151 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:37.152 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.152 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.152 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:37.152 Test configuration:
00:02:37.152 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:37.152 SPDK_TEST_NVMF=1
00:02:37.152 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:37.152 SPDK_TEST_URING=1
00:02:37.152 SPDK_TEST_USDT=1
00:02:37.152 SPDK_RUN_UBSAN=1
00:02:37.152 NET_TYPE=virt
00:02:37.152 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:37.152 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:37.152 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:37.152 RUN_NIGHTLY=1
06:41:41 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
06:41:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
06:41:41 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
06:41:41 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
06:41:41 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
06:41:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:41:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:41:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:41:41 -- paths/export.sh@5 -- $ export PATH
06:41:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:41:41 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
06:41:41 -- common/autobuild_common.sh@440 -- $ date +%s
06:41:41 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734072101.XXXXXX
06:41:41 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734072101.86cwSy
06:41:41 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
06:41:41 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
06:41:41 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
06:41:41 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
06:41:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
06:41:41 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
06:41:41 -- common/autobuild_common.sh@456 -- $ get_config_params
06:41:41 -- common/autotest_common.sh@397 -- $ xtrace_disable
06:41:41 -- common/autotest_common.sh@10 -- $ set +x
06:41:41 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
06:41:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
06:41:41 -- spdk/autobuild.sh@12 -- $ umask 022
06:41:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
06:41:41 -- spdk/autobuild.sh@16 -- $ date -u
00:02:37.152 Fri Dec 13 06:41:41 AM UTC 2024
06:41:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:37.152 LTS-67-gc13c99a5e
06:41:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
06:41:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
06:41:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
06:41:41 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
06:41:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable
06:41:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.152 ************************************
00:02:37.152 START TEST ubsan
00:02:37.152 ************************************
00:02:37.152 using ubsan
06:41:41 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:37.152
00:02:37.152 real 0m0.000s
00:02:37.152 user 0m0.000s
00:02:37.152 sys 0m0.000s
06:41:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:37.152 ************************************
06:41:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.152 END TEST ubsan
00:02:37.152 ************************************
06:41:41 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
06:41:41 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
06:41:41 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
06:41:41 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
06:41:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable
06:41:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.411 ************************************
00:02:37.411 START TEST build_native_dpdk
00:02:37.411 ************************************
06:41:41 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk
06:41:41 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
06:41:41 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
06:41:41 -- common/autobuild_common.sh@50 -- $ local compiler_version
06:41:41 -- common/autobuild_common.sh@51 -- $ local compiler
06:41:41 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
06:41:41 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
06:41:41 -- common/autobuild_common.sh@55 -- $ compiler=gcc
06:41:41 -- common/autobuild_common.sh@61 -- $ export CC=gcc
06:41:41 -- common/autobuild_common.sh@61 -- $ CC=gcc
06:41:41 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
06:41:41 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
06:41:41 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
06:41:41 -- common/autobuild_common.sh@68 -- $ compiler_version=13
06:41:41 -- common/autobuild_common.sh@69 -- $ compiler_version=13
06:41:41 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
06:41:41 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
06:41:41 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
06:41:41 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
06:41:41 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
06:41:41 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:37.411 eeb0605f11 version: 23.11.0
00:02:37.411 238778122a doc: update release notes for 23.11
00:02:37.411 46aa6b3cfc doc: fix description of RSS features
00:02:37.411 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:37.411 7e421ae345 devtools: support skipping forbid rule check
06:41:41 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
06:41:41 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
06:41:41 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
06:41:41 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
06:41:41 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
06:41:41 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
06:41:41 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
06:41:41 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
06:41:41 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
06:41:41 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
06:41:41 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
06:41:41 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
06:41:41 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
06:41:41 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
06:41:41 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
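The flag selection traced above gates extra warnings on the detected compiler version. Condensed into a stand-alone sketch; the variable names mirror the trace, the version extraction is simplified:

```bash
# Sketch of the compiler-flag gating above: newer GCCs get extra flags.
compiler_version=$(gcc -dumpversion)             # "13" on this Fedora 39 image
dpdk_cflags='-fPIC -g -fcommon'
(( ${compiler_version%%.*} >= 5 ))  && dpdk_cflags+=' -Werror'
(( ${compiler_version%%.*} >= 10 )) && dpdk_cflags+=' -Wno-stringop-overflow'
echo "$dpdk_cflags"   # -> -fPIC -g -fcommon -Werror -Wno-stringop-overflow
```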
06:41:41 -- common/autobuild_common.sh@168 -- $ uname -s
06:41:41 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
06:41:41 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
06:41:41 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
06:41:41 -- scripts/common.sh@332 -- $ local ver1 ver1_l
06:41:41 -- scripts/common.sh@333 -- $ local ver2 ver2_l
06:41:41 -- scripts/common.sh@335 -- $ IFS=.-:
06:41:41 -- scripts/common.sh@335 -- $ read -ra ver1
06:41:41 -- scripts/common.sh@336 -- $ IFS=.-:
06:41:41 -- scripts/common.sh@336 -- $ read -ra ver2
06:41:41 -- scripts/common.sh@337 -- $ local 'op=<'
06:41:41 -- scripts/common.sh@339 -- $ ver1_l=3
06:41:41 -- scripts/common.sh@340 -- $ ver2_l=3
06:41:41 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
06:41:41 -- scripts/common.sh@343 -- $ case "$op" in
06:41:41 -- scripts/common.sh@344 -- $ : 1
06:41:41 -- scripts/common.sh@363 -- $ (( v = 0 ))
06:41:41 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
06:41:41 -- scripts/common.sh@364 -- $ decimal 23
06:41:41 -- scripts/common.sh@352 -- $ local d=23
06:41:41 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
06:41:41 -- scripts/common.sh@354 -- $ echo 23
06:41:41 -- scripts/common.sh@364 -- $ ver1[v]=23
06:41:41 -- scripts/common.sh@365 -- $ decimal 21
06:41:41 -- scripts/common.sh@352 -- $ local d=21
06:41:41 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
06:41:41 -- scripts/common.sh@354 -- $ echo 21
06:41:41 -- scripts/common.sh@365 -- $ ver2[v]=21
06:41:41 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
06:41:41 -- scripts/common.sh@366 -- $ return 1
06:41:41 -- common/autobuild_common.sh@173 -- $ patch -p1
patching file config/rte_config.h
Hunk #1 succeeded at 60 (offset 1 line).
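The lt/cmp_versions trace above (and the second run that follows) is a plain field-by-field version comparison: split on ".-:" and compare each numeric field. A condensed sketch of the same logic; names loosely mirror scripts/common.sh but this is a simplification, not the library function itself:

```bash
# Condensed sketch of the version comparison traced above ("<" case only).
ver_lt() {
  local IFS=.-:                 # split fields on '.', '-', ':', as the trace shows
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller: less-than
  done
  return 1                       # equal: not less-than
}

ver_lt 23.11.0 21.11.0 || echo "skip"         # fails, like the first trace
ver_lt 23.11.0 24.07.0 && echo "apply patch"  # succeeds, so the pcapng patch below runs
```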
06:41:41 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
06:41:41 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
06:41:41 -- scripts/common.sh@332 -- $ local ver1 ver1_l
06:41:41 -- scripts/common.sh@333 -- $ local ver2 ver2_l
06:41:41 -- scripts/common.sh@335 -- $ IFS=.-:
06:41:41 -- scripts/common.sh@335 -- $ read -ra ver1
06:41:41 -- scripts/common.sh@336 -- $ IFS=.-:
06:41:41 -- scripts/common.sh@336 -- $ read -ra ver2
06:41:41 -- scripts/common.sh@337 -- $ local 'op=<'
06:41:41 -- scripts/common.sh@339 -- $ ver1_l=3
06:41:41 -- scripts/common.sh@340 -- $ ver2_l=3
06:41:41 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
06:41:41 -- scripts/common.sh@343 -- $ case "$op" in
06:41:41 -- scripts/common.sh@344 -- $ : 1
06:41:41 -- scripts/common.sh@363 -- $ (( v = 0 ))
06:41:41 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
06:41:41 -- scripts/common.sh@364 -- $ decimal 23
06:41:41 -- scripts/common.sh@352 -- $ local d=23
06:41:41 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
06:41:41 -- scripts/common.sh@354 -- $ echo 23
06:41:41 -- scripts/common.sh@364 -- $ ver1[v]=23
06:41:41 -- scripts/common.sh@365 -- $ decimal 24
06:41:41 -- scripts/common.sh@352 -- $ local d=24
06:41:41 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
06:41:41 -- scripts/common.sh@354 -- $ echo 24
06:41:41 -- scripts/common.sh@365 -- $ ver2[v]=24
06:41:41 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
06:41:41 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
06:41:41 -- scripts/common.sh@367 -- $ return 0
06:41:41 -- common/autobuild_common.sh@177 -- $ patch -p1
patching file lib/pcapng/rte_pcapng.c
06:41:41 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
06:41:41 -- common/autobuild_common.sh@181 -- $ uname -s
06:41:41 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
06:41:41 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
06:41:41 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:42.679 The Meson build system
00:02:42.679 Version: 1.5.0
00:02:42.679 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:42.679 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:42.679 Build type: native build
00:02:42.679 Program cat found: YES (/usr/bin/cat)
00:02:42.679 Project name: DPDK
00:02:42.679 Project version: 23.11.0
00:02:42.679 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:42.679 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:42.679 Host machine cpu family: x86_64
00:02:42.679 Host machine cpu: x86_64
00:02:42.679 Message: ## Building in Developer Mode ##
00:02:42.679 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:42.679 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:42.679 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:42.679 Program python3 found: YES (/usr/bin/python3)
00:02:42.679 Program cat found: YES (/usr/bin/cat)
00:02:42.679 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:42.679 Compiler for C supports arguments -march=native: YES
00:02:42.679 Checking for size of "void *" : 8
00:02:42.679 Checking for size of "void *" : 8 (cached)
00:02:42.679 Library m found: YES
00:02:42.679 Library numa found: YES
00:02:42.679 Has header "numaif.h" : YES
00:02:42.679 Library fdt found: NO
00:02:42.679 Library execinfo found: NO
00:02:42.679 Has header "execinfo.h" : YES
00:02:42.679 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:42.679 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:42.679 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:42.679 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:42.679 Run-time dependency openssl found: YES 3.1.1
00:02:42.679 Run-time dependency libpcap found: YES 1.10.4
00:02:42.679 Has header "pcap.h" with dependency libpcap: YES
00:02:42.679 Compiler for C supports arguments -Wcast-qual: YES
00:02:42.679 Compiler for C supports arguments -Wdeprecated: YES
00:02:42.679 Compiler for C supports arguments -Wformat: YES
00:02:42.679 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:42.679 Compiler for C supports arguments -Wformat-security: NO
00:02:42.679 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:42.679 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:42.679 Compiler for C supports arguments -Wnested-externs: YES
00:02:42.679 Compiler for C supports arguments -Wold-style-definition: YES
00:02:42.679 Compiler for C supports arguments -Wpointer-arith: YES
00:02:42.679 Compiler for C supports arguments -Wsign-compare: YES
00:02:42.679 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:42.679 Compiler for C supports arguments -Wundef: YES
00:02:42.679 Compiler for C supports arguments -Wwrite-strings: YES
00:02:42.679 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:42.679 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:42.679 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:42.679 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:42.679 Program objdump found: YES (/usr/bin/objdump)
00:02:42.679 Compiler for C supports arguments -mavx512f: YES
00:02:42.679 Checking if "AVX512 checking" compiles: YES
00:02:42.679 Fetching value of define "__SSE4_2__" : 1
00:02:42.679 Fetching value of define "__AES__" : 1
00:02:42.679 Fetching value of define "__AVX__" : 1
00:02:42.679 Fetching value of define "__AVX2__" : 1
00:02:42.679 Fetching value of define "__AVX512BW__" : (undefined)
00:02:42.679 Fetching value of define "__AVX512CD__" : (undefined)
00:02:42.679 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:42.679 Fetching value of define "__AVX512F__" : (undefined)
00:02:42.679 Fetching value of define "__AVX512VL__" : (undefined)
00:02:42.679 Fetching value of define "__PCLMUL__" : 1
00:02:42.679 Fetching value of define "__RDRND__" : 1
00:02:42.679 Fetching value of define "__RDSEED__" : 1
00:02:42.679 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:42.679 Fetching value of define "__znver1__" : (undefined)
00:02:42.679 Fetching value of define "__znver2__" : (undefined)
00:02:42.679 Fetching value of define "__znver3__" : (undefined)
00:02:42.680 Fetching value of define "__znver4__" : (undefined)
00:02:42.680 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:42.680 Message: lib/log: Defining dependency "log"
00:02:42.680 Message: lib/kvargs: Defining dependency "kvargs"
00:02:42.680 Message: lib/telemetry: Defining dependency "telemetry"
00:02:42.680 Checking for function "getentropy" : NO
00:02:42.680 Message: lib/eal: Defining dependency "eal"
00:02:42.680 Message: lib/ring: Defining dependency "ring"
00:02:42.680 Message: lib/rcu: Defining dependency "rcu"
00:02:42.680 Message: lib/mempool: Defining dependency "mempool"
00:02:42.680 Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.680 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.680 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.680 Compiler for C supports arguments -mpclmul: YES
00:02:42.680 Compiler for C supports arguments -maes: YES
00:02:42.680 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.680 Compiler for C supports arguments -mavx512bw: YES
00:02:42.680 Compiler for C supports arguments -mavx512dq: YES
00:02:42.680 Compiler for C supports arguments -mavx512vl: YES
00:02:42.680 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:42.680 Compiler for C supports arguments -mavx2: YES
00:02:42.680 Compiler for C supports arguments -mavx: YES
00:02:42.680 Message: lib/net: Defining dependency "net"
00:02:42.680 Message: lib/meter: Defining dependency "meter"
00:02:42.680 Message: lib/ethdev: Defining dependency "ethdev"
00:02:42.680 Message: lib/pci: Defining dependency "pci"
00:02:42.680 Message: lib/cmdline: Defining dependency "cmdline"
00:02:42.680 Message: lib/metrics: Defining dependency "metrics"
00:02:42.680 Message: lib/hash: Defining dependency "hash"
00:02:42.680 Message: lib/timer: Defining dependency "timer"
00:02:42.680 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:02:42.680 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:02:42.680 Message: lib/acl: Defining dependency "acl"
00:02:42.680 Message: lib/bbdev: Defining dependency "bbdev"
00:02:42.680 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:42.680 Run-time dependency libelf found: YES 0.191
00:02:42.680 Message: lib/bpf: Defining dependency "bpf"
00:02:42.680 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:42.680 Message: lib/compressdev: Defining dependency "compressdev"
00:02:42.680 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:42.680 Message: lib/distributor: Defining dependency "distributor"
00:02:42.680 Message: lib/dmadev: Defining dependency "dmadev"
00:02:42.680 Message: lib/efd: Defining dependency "efd"
00:02:42.680 Message: lib/eventdev: Defining dependency "eventdev"
00:02:42.680 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:42.680 Message: lib/gpudev: Defining dependency "gpudev"
00:02:42.680 Message: lib/gro: Defining dependency "gro"
00:02:42.680 Message: lib/gso: Defining dependency "gso"
00:02:42.680 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:42.680 Message: lib/jobstats: Defining dependency "jobstats"
00:02:42.680 Message: lib/latencystats: Defining dependency "latencystats"
00:02:42.680 Message: lib/lpm: Defining dependency "lpm"
00:02:42.680 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:42.680 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:42.680 Message: lib/member: Defining dependency "member"
00:02:42.680 Message: lib/pcapng: Defining dependency "pcapng"
00:02:42.680 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:42.680 Message: lib/power: Defining dependency "power"
00:02:42.680 Message: lib/rawdev: Defining dependency "rawdev"
00:02:42.680 Message: lib/regexdev: Defining dependency "regexdev"
00:02:42.680 Message: lib/mldev: Defining dependency "mldev"
00:02:42.680 Message: lib/rib: Defining dependency "rib"
00:02:42.680 Message: lib/reorder: Defining dependency "reorder"
00:02:42.680 Message: lib/sched: Defining dependency "sched"
00:02:42.680 Message: lib/security: Defining dependency "security"
00:02:42.680 Message: lib/stack: Defining dependency "stack"
00:02:42.680 Has header "linux/userfaultfd.h" : YES
00:02:42.680 Has header "linux/vduse.h" : YES
00:02:42.680 Message: lib/vhost: Defining dependency "vhost"
00:02:42.680 Message: lib/ipsec: Defining dependency "ipsec"
00:02:42.680 Message: lib/pdcp: Defining dependency "pdcp"
00:02:42.680 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.680 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:42.680 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:02:42.680 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:42.680 Message: lib/fib: Defining dependency "fib"
00:02:42.680 Message: lib/port: Defining dependency "port"
00:02:42.680 Message: lib/pdump: Defining dependency "pdump"
00:02:42.680 Message: lib/table: Defining dependency "table"
00:02:42.680 Message: lib/pipeline: Defining dependency "pipeline"
00:02:42.680 Message: lib/graph: Defining dependency "graph"
00:02:42.680 Message: lib/node: Defining dependency "node"
00:02:44.584 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:44.584 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:44.584 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:44.584 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:44.584 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:44.584 Compiler for C supports arguments -Wno-unused-value: YES
00:02:44.584 Compiler for C supports arguments -Wno-format: YES
00:02:44.584 Compiler for C supports arguments -Wno-format-security: YES
00:02:44.584 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:44.584 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:44.584 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:44.584 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:44.584 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:44.584 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:44.584 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:44.584 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:44.584 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:44.584 Has header "sys/epoll.h" : YES
00:02:44.584 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:44.584 Configuring doxy-api-html.conf using configuration
00:02:44.584 Configuring doxy-api-man.conf using configuration
00:02:44.584 Program mandb found: YES (/usr/bin/mandb)
00:02:44.584 Program sphinx-build found: NO
00:02:44.584 Configuring rte_build_config.h using configuration
00:02:44.584 Message:
00:02:44.584 =================
00:02:44.584 Applications Enabled
00:02:44.584 =================
00:02:44.584
00:02:44.584
00:02:44.584 apps:
00:02:44.584 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:44.584 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:44.584 test-pmd, test-regex, test-sad, test-security-perf,
00:02:44.584
00:02:44.584 Message:
00:02:44.584 =================
00:02:44.584 Libraries Enabled
00:02:44.584 =================
00:02:44.584
00:02:44.584 libs:
00:02:44.584 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:44.584 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:44.584 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:44.584 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:44.584 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:44.584 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:44.584 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:44.584
00:02:44.584
00:02:44.584 Message:
00:02:44.584 ===============
00:02:44.584 Drivers Enabled
00:02:44.584 ===============
00:02:44.584
00:02:44.584 common:
00:02:44.584
00:02:44.584 bus:
00:02:44.584 pci, vdev,
00:02:44.584 mempool:
00:02:44.584 ring,
00:02:44.584 dma:
00:02:44.584
00:02:44.584 net:
00:02:44.584 i40e,
00:02:44.584 raw:
00:02:44.584
00:02:44.584 crypto:
00:02:44.584
00:02:44.584 compress:
00:02:44.584
00:02:44.584 regex:
00:02:44.584
00:02:44.584 ml:
00:02:44.584
00:02:44.584 vdpa:
00:02:44.584
00:02:44.584 event:
00:02:44.584
00:02:44.584 baseband:
00:02:44.584
00:02:44.584 gpu:
00:02:44.584
00:02:44.584
00:02:44.584 Message:
00:02:44.584 =================
00:02:44.584 Content Skipped
00:02:44.584 =================
00:02:44.584
00:02:44.584 apps:
00:02:44.584
00:02:44.584 libs:
00:02:44.584
00:02:44.584 drivers:
00:02:44.584 common/cpt: not in enabled drivers build config
00:02:44.584 common/dpaax: not in enabled drivers build config
00:02:44.584 common/iavf: not in enabled drivers build config
00:02:44.584 common/idpf: not in enabled drivers build config
00:02:44.584 common/mvep: not in enabled drivers build config
00:02:44.584 common/octeontx: not in enabled drivers build config
00:02:44.584 bus/auxiliary: not in enabled drivers build config
00:02:44.584 bus/cdx: not in enabled drivers build config
00:02:44.584 bus/dpaa: not in enabled drivers build config
00:02:44.584 bus/fslmc: not in enabled drivers build config
00:02:44.584 bus/ifpga: not in enabled drivers build config
00:02:44.584 bus/platform: not in enabled drivers build config
00:02:44.584 bus/vmbus: not in enabled drivers build config
00:02:44.584 common/cnxk: not in enabled drivers build config
00:02:44.584 common/mlx5: not in enabled drivers build config
00:02:44.584 common/nfp: not in enabled drivers build config
00:02:44.584 common/qat: not in enabled drivers build config
00:02:44.584 common/sfc_efx: not in enabled drivers build config
00:02:44.584 mempool/bucket: not in enabled drivers build config
00:02:44.584 mempool/cnxk: not in enabled drivers build config
00:02:44.584 mempool/dpaa: not in enabled drivers build config
00:02:44.584 mempool/dpaa2: not in enabled drivers build config
00:02:44.584 mempool/octeontx: not in enabled drivers build config
00:02:44.584 mempool/stack: not in enabled drivers build config
00:02:44.584 dma/cnxk: not in enabled drivers build config
00:02:44.584 dma/dpaa: not in enabled drivers build config
00:02:44.584 dma/dpaa2: not in enabled drivers build config
00:02:44.584 dma/hisilicon: not in enabled drivers build config
00:02:44.584 dma/idxd: not in enabled drivers build config
00:02:44.584 dma/ioat: not in enabled drivers build config
00:02:44.584 dma/skeleton: not in enabled drivers build config
00:02:44.584 net/af_packet: not in enabled drivers build config
00:02:44.584 net/af_xdp: not in enabled drivers build config
00:02:44.585 net/ark: not in enabled drivers build config
00:02:44.585 net/atlantic: not in enabled drivers build config
00:02:44.585 net/avp: not in enabled drivers build config
00:02:44.585 net/axgbe: not in enabled drivers build config
00:02:44.585 net/bnx2x: not in enabled drivers build config
00:02:44.585 net/bnxt: not in enabled drivers build config
00:02:44.585 net/bonding: not in enabled drivers build config
00:02:44.585 net/cnxk: not in enabled drivers build config
00:02:44.585 net/cpfl: not in enabled drivers build config
00:02:44.585 net/cxgbe: not in enabled drivers build config
00:02:44.585 net/dpaa: not in enabled drivers build config
00:02:44.585 net/dpaa2: not in enabled drivers build config
00:02:44.585 net/e1000: not in enabled drivers build config
00:02:44.585 net/ena: not in enabled drivers build config
00:02:44.585 net/enetc: not in enabled drivers build config
00:02:44.585 net/enetfec: not in enabled drivers build config
00:02:44.585 net/enic: not in enabled drivers build config
00:02:44.585 net/failsafe: not in enabled drivers build config
00:02:44.585 net/fm10k: not in enabled drivers build config
00:02:44.585 net/gve: not in enabled drivers build config
00:02:44.585 net/hinic: not in enabled drivers build config
00:02:44.585 net/hns3: not in enabled drivers build config
00:02:44.585 net/iavf: not in enabled drivers build config
00:02:44.585 net/ice: not in enabled drivers build config
00:02:44.585 net/idpf: not in enabled drivers build config
00:02:44.585 net/igc: not in enabled drivers build config
00:02:44.585 net/ionic: not in enabled drivers build config
00:02:44.585 net/ipn3ke: not in enabled drivers build config
00:02:44.585 net/ixgbe: not in enabled drivers build config
00:02:44.585 net/mana: not in enabled drivers build config
00:02:44.585 net/memif: not in enabled drivers build config
00:02:44.585 net/mlx4: not in enabled drivers build config
00:02:44.585 net/mlx5: not in enabled drivers build config
00:02:44.585 net/mvneta: not in enabled drivers build config
00:02:44.585 net/mvpp2: not in enabled drivers build config
00:02:44.585 net/netvsc: not in enabled drivers build config
00:02:44.585 net/nfb: not in enabled drivers build config
00:02:44.585 net/nfp: not in enabled drivers build config
00:02:44.585 net/ngbe: not in enabled drivers build config
00:02:44.585 net/null: not in enabled drivers build config
00:02:44.585 net/octeontx: not in enabled drivers build config
00:02:44.585 net/octeon_ep: not in enabled drivers build config
00:02:44.585 net/pcap: not in enabled drivers build config
00:02:44.585 net/pfe: not in enabled drivers build config
00:02:44.585 net/qede: not in enabled drivers build config
00:02:44.585 net/ring: not in enabled drivers build config
00:02:44.585 net/sfc: not in enabled drivers build config
00:02:44.585 net/softnic: not in enabled drivers build config
00:02:44.585 net/tap: not in enabled drivers build config
00:02:44.585 net/thunderx: not in enabled drivers build config
00:02:44.585 net/txgbe: not in enabled drivers build config
00:02:44.585 net/vdev_netvsc: not in enabled drivers build config
00:02:44.585 net/vhost: not in enabled drivers build config
00:02:44.585 net/virtio: not in enabled drivers build config
00:02:44.585 net/vmxnet3: not in enabled drivers build config
00:02:44.585 raw/cnxk_bphy: not in enabled drivers build config
00:02:44.585 raw/cnxk_gpio: not in enabled drivers build config
00:02:44.585 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:44.585 raw/ifpga: not in enabled drivers build config
00:02:44.585 raw/ntb: not in enabled drivers build config
00:02:44.585 raw/skeleton: not in enabled drivers build config
00:02:44.585 crypto/armv8: not in enabled drivers build config
00:02:44.585 crypto/bcmfs: not in enabled drivers build config
00:02:44.585 crypto/caam_jr: not in enabled drivers build config
00:02:44.585 crypto/ccp: not in enabled drivers build config
00:02:44.585 crypto/cnxk: not in enabled drivers build config
00:02:44.585 crypto/dpaa_sec: not in enabled drivers build config
00:02:44.585 crypto/dpaa2_sec: not in enabled drivers build config
00:02:44.585 crypto/ipsec_mb: not in enabled drivers build config
00:02:44.585 crypto/mlx5: not in enabled drivers build config
00:02:44.585 crypto/mvsam: not in enabled drivers build config
00:02:44.585 crypto/nitrox: not in enabled drivers build config
00:02:44.585 crypto/null: not in enabled drivers build config
00:02:44.585 crypto/octeontx: not in enabled drivers build config
00:02:44.585 crypto/openssl: not in enabled drivers build config
00:02:44.585 crypto/scheduler: not in enabled drivers build config
00:02:44.585 crypto/uadk: not in enabled drivers build config
00:02:44.585 crypto/virtio: not in enabled drivers build config
00:02:44.585 compress/isal: not in enabled drivers build config
00:02:44.585 compress/mlx5: not in enabled drivers build config
00:02:44.585 compress/octeontx: not in enabled drivers build config
00:02:44.585 compress/zlib: not in enabled drivers build config
00:02:44.585 regex/mlx5: not in enabled drivers build config
00:02:44.585 regex/cn9k: not in enabled drivers build config
00:02:44.585 ml/cnxk: not in enabled drivers build config
00:02:44.585 vdpa/ifc: not in enabled drivers build config
00:02:44.585 vdpa/mlx5: not in enabled drivers build config
00:02:44.585 vdpa/nfp: not in enabled drivers build config
00:02:44.585 vdpa/sfc: not in enabled drivers build config
00:02:44.585 event/cnxk: not in enabled drivers build config
00:02:44.585 event/dlb2: not in enabled drivers build config
00:02:44.585 event/dpaa: not in enabled drivers build config
00:02:44.585 event/dpaa2: not in enabled drivers build config
00:02:44.585 event/dsw: not in enabled drivers build config
00:02:44.585 event/opdl: not in enabled drivers build config
00:02:44.585 event/skeleton: not in enabled drivers build config
00:02:44.585 event/sw: not in enabled drivers build config
00:02:44.585 event/octeontx: not in enabled drivers build config
00:02:44.585 baseband/acc: not in enabled drivers build config
00:02:44.585 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:44.585 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:44.585 baseband/la12xx: not in enabled drivers build config
00:02:44.585 baseband/null: not in enabled drivers build config
00:02:44.585 baseband/turbo_sw: not in enabled drivers build config
00:02:44.585 gpu/cuda: not in enabled drivers build config
00:02:44.585
00:02:44.585
00:02:44.585 Build targets in project: 220
00:02:44.585
00:02:44.585 DPDK 23.11.0
00:02:44.585
00:02:44.585 User defined options
00:02:44.585 libdir : lib
00:02:44.585 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:44.585 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:44.585 c_link_args :
00:02:44.585 enable_docs : false
00:02:44.585 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:44.585 enable_kmods : false
00:02:44.585 machine : native
00:02:44.585 tests : false
00:02:44.585
00:02:44.585 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:44.585 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
-Wno-stringop-overflow 00:02:44.585 c_link_args : 00:02:44.585 enable_docs : false 00:02:44.585 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:44.585 enable_kmods : false 00:02:44.585 machine : native 00:02:44.585 tests : false 00:02:44.585 00:02:44.585 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.585 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:44.585 06:41:49 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:44.844 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:44.844 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.844 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.844 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.844 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.844 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.844 [6/710] Linking static target lib/librte_kvargs.a 00:02:45.102 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:45.102 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:45.102 [9/710] Linking static target lib/librte_log.a 00:02:45.102 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:45.102 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.360 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.360 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.360 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.360 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.618 [16/710] Linking target lib/librte_log.so.24.0 00:02:45.618 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.618 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.876 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.876 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.876 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.876 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.876 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:45.876 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:46.134 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:46.134 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.134 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.134 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.134 [29/710] Linking static target lib/librte_telemetry.a 00:02:46.134 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.392 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.393 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.393 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.650 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.650 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:46.650 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.650 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.650 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.650 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.650 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:46.650 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.650 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.651 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.909 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.909 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.167 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.167 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.167 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.167 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.425 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.425 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.425 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.425 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.425 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.704 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.704 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.704 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.972 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.972 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.972 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.972 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.973 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.973 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.973 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.231 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.231 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.231 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.231 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.489 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.489 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.489 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.489 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:48.489 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:48.489 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:48.747 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:48.747 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:48.747 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:48.747 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:49.005 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:49.005 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:49.264 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:49.264 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:49.264 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:49.264 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:49.264 [85/710] Linking static target lib/librte_ring.a
00:02:49.522 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:49.522 [87/710] Linking static target lib/librte_eal.a
00:02:49.522 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:49.522 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.522 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:49.780 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:49.780 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:49.780 [93/710] Linking static target lib/librte_mempool.a
00:02:49.780 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:49.780 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:50.039 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:50.039 [97/710] Linking static target lib/librte_rcu.a
00:02:50.039 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:50.039 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:50.297 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.297 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:50.297 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:50.297 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.555 [104/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:50.555 [105/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:50.555 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:50.811 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:50.811 [108/710] Linking static target lib/librte_mbuf.a
00:02:50.811 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:50.811 [110/710] Linking static target lib/librte_net.a
00:02:51.069 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:51.069 [112/710] Linking static target lib/librte_meter.a
00:02:51.069 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.069 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:51.069 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:51.327 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:51.327 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:51.327 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.327 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.894 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:51.894 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:52.152 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:52.152 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:52.410 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:52.410 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:52.410 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:52.410 [127/710] Linking static target lib/librte_pci.a
00:02:52.410 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:52.410 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:52.669 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.669 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:52.669 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:52.669 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:52.669 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:52.669 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:52.669 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:52.669 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:52.669 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:52.927 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:52.927 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:52.927 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:52.927 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:53.185 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:53.185 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:53.185 [145/710] Linking static target lib/librte_cmdline.a
00:02:53.444 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:53.444 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:53.444 [148/710] Linking static target lib/librte_metrics.a
00:02:53.444 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:53.702 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:53.960 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.218 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:54.218 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:54.218 [154/710] Linking static target lib/librte_timer.a
00:02:54.218 [155/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.476 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.734 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:54.734 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:54.992 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:54.992 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:55.558 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:55.558 [162/710] Linking static target lib/librte_ethdev.a
00:02:55.558 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:55.558 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:55.817 [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:55.817 [166/710] Linking static target lib/librte_bitratestats.a
00:02:55.817 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:55.817 [168/710] Linking static target lib/librte_bbdev.a
00:02:55.817 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.817 [170/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.817 [171/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:56.075 [172/710] Linking static target lib/librte_hash.a
00:02:56.075 [173/710] Linking target lib/librte_eal.so.24.0
00:02:56.075 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:56.075 [175/710] Linking target lib/librte_ring.so.24.0
00:02:56.334 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:56.334 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:56.334 [178/710] Linking target lib/librte_meter.so.24.0
00:02:56.334 [179/710] Linking target lib/librte_rcu.so.24.0
00:02:56.334 [180/710] Linking target lib/librte_mempool.so.24.0
00:02:56.334 [181/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:56.593 [182/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.593 [183/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:56.593 [184/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.593 [185/710] Linking static target lib/acl/libavx2_tmp.a
00:02:56.593 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:56.593 [187/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:56.593 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:56.593 [189/710] Linking target lib/librte_pci.so.24.0
00:02:56.593 [190/710] Linking target lib/librte_timer.so.24.0
00:02:56.593 [191/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:56.593 [192/710] Linking target lib/librte_mbuf.so.24.0
00:02:56.593 [193/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:56.593 [194/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:56.593 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:56.851 [196/710] Linking target lib/librte_net.so.24.0
00:02:56.851 [197/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:56.851 [198/710] Linking static target lib/acl/libavx512_tmp.a
00:02:56.851 [199/710] Linking target lib/librte_bbdev.so.24.0
00:02:56.851 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:56.851 [201/710] Linking target lib/librte_cmdline.so.24.0
00:02:56.851 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:57.109 [203/710] Linking static target lib/librte_acl.a
00:02:57.109 [204/710] Linking target lib/librte_hash.so.24.0
00:02:57.109 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:57.109 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:57.109 [207/710] Linking static target lib/librte_cfgfile.a
00:02:57.109 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:57.367 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.367 [210/710] Linking target lib/librte_acl.so.24.0
00:02:57.367 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:57.367 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:57.367 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:57.367 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.625 [215/710] Linking target lib/librte_cfgfile.so.24.0
00:02:57.625 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:57.883 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:57.883 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:57.883 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:57.883 [220/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:57.883 [221/710] Linking static target lib/librte_bpf.a
00:02:58.142 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:58.142 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:58.142 [224/710] Linking static target lib/librte_compressdev.a
00:02:58.400 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.400 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:58.400 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:58.658 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:58.658 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:58.658 [230/710] Linking static target lib/librte_distributor.a
00:02:58.658 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.916 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:58.916 [233/710] Linking target lib/librte_compressdev.so.24.0
00:02:58.916 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.916 [235/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:58.916 [236/710] Linking target lib/librte_distributor.so.24.0
00:02:59.174 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:59.174 [238/710] Linking static target lib/librte_dmadev.a
00:02:59.445 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.445 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:59.445 [241/710] Linking target lib/librte_dmadev.so.24.0
00:02:59.719 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:59.719 [243/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:59.978 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:59.978 [245/710] Linking static target lib/librte_efd.a
00:02:59.978 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:03:00.236 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:00.236 [248/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.236 [249/710] Linking static target lib/librte_cryptodev.a
00:03:00.236 [250/710] Linking target lib/librte_efd.so.24.0
00:03:00.494 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:00.494 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:03:00.494 [253/710] Linking static target lib/librte_dispatcher.a
00:03:00.753 [254/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.753 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:00.753 [256/710] Linking target lib/librte_ethdev.so.24.0
00:03:00.753 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:03:00.753 [258/710] Linking target lib/librte_metrics.so.24.0
00:03:01.011 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:01.011 [260/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.011 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:03:01.011 [262/710] Linking target lib/librte_bpf.so.24.0
00:03:01.011 [263/710] Linking static target lib/librte_gpudev.a
00:03:01.011 [264/710] Linking target lib/librte_bitratestats.so.24.0
00:03:01.011 [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:01.011 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:01.269 [267/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:03:01.269 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:01.528 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.528 [270/710] Linking target lib/librte_cryptodev.so.24.0
00:03:01.528 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:03:01.528 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:03:01.528 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:01.786 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.786 [275/710] Linking target lib/librte_gpudev.so.24.0
00:03:01.786 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:02.044 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:02.044 [278/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:02.044 [279/710] Linking static target lib/librte_gro.a
00:03:02.044 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:02.044 [281/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:02.044 [282/710] Linking static target lib/librte_eventdev.a
00:03:02.044 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:02.044 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:02.302 [285/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.302 [286/710] Linking target lib/librte_gro.so.24.0
00:03:02.302 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:02.302 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:02.561 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:02.561 [290/710] Linking static target lib/librte_gso.a
00:03:02.561 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:02.819 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.819 [293/710] Linking target lib/librte_gso.so.24.0
00:03:02.819 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:02.819 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:02.819 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:02.819 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:02.819 [298/710] Linking static target lib/librte_jobstats.a
00:03:03.077 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:03.077 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:03.077 [301/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:03.077 [302/710] Linking static target lib/librte_latencystats.a
00:03:03.077 [303/710] Linking static target lib/librte_ip_frag.a
00:03:03.335 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.335 [305/710] Linking target lib/librte_jobstats.so.24.0
00:03:03.335 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.335 [307/710] Linking target lib/librte_latencystats.so.24.0
00:03:03.335 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.335 [309/710] Linking target lib/librte_ip_frag.so.24.0
00:03:03.593 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:03:03.593 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a
00:03:03.593 [312/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:03:03.593 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:03.593 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:03.593 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:03.593 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:03.851 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:04.109 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.109 [319/710] Linking target lib/librte_eventdev.so.24.0
00:03:04.109 [320/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:04.109 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:04.109 [322/710] Linking static target lib/librte_lpm.a
00:03:04.367 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:03:04.367 [324/710] Linking target lib/librte_dispatcher.so.24.0
00:03:04.367 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:04.367 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:04.367 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:04.367 [328/710] Linking static target lib/librte_pcapng.a
00:03:04.367 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:04.626 [330/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:04.626 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:04.626 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.626 [333/710] Linking target lib/librte_lpm.so.24.0
00:03:04.626 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.626 [335/710] Linking target lib/librte_pcapng.so.24.0
00:03:04.884 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:03:04.884 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:03:04.884 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:04.884 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:05.142 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:05.142 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:05.142 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:05.142 [343/710] Linking static target lib/librte_power.a
00:03:05.401 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:05.401 [345/710] Linking static target lib/librte_regexdev.a
00:03:05.401 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:05.401 [347/710] Linking static target lib/librte_rawdev.a
00:03:05.401 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:05.401 [349/710] Linking static target lib/librte_member.a
00:03:05.401 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:05.659 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:05.659 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:05.659 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.659 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:05.659 [355/710] Linking static target lib/librte_mldev.a
00:03:05.917 [356/710] Linking target lib/librte_member.so.24.0
00:03:05.917 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.917 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.917 [359/710] Linking target lib/librte_rawdev.so.24.0
00:03:05.917 [360/710] Linking target lib/librte_power.so.24.0
00:03:05.917 [361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:05.917 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:06.175 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.175 [364/710] Linking target lib/librte_regexdev.so.24.0
00:03:06.175 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:06.434 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:06.434 [367/710] Linking static target lib/librte_reorder.a
00:03:06.434 [368/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:06.434 [369/710] Linking static target lib/librte_rib.a
00:03:06.434 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:06.434 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:06.434 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:06.692 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:06.692 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:06.692 [375/710] Linking static target lib/librte_stack.a
00:03:06.692 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.692 [377/710] Linking target lib/librte_reorder.so.24.0
00:03:06.950 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:06.950 [379/710] Linking static target lib/librte_security.a
00:03:06.950 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.950 [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:03:06.950 [382/710] Linking target lib/librte_rib.so.24.0
00:03:06.950 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.950 [384/710] Linking target lib/librte_stack.so.24.0
00:03:06.950 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.950 [386/710] Linking target lib/librte_mldev.so.24.0
00:03:06.950 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:03:07.208 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:07.208 [389/710] Linking target lib/librte_security.so.24.0
00:03:07.208 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:07.208 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:07.467 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:03:07.467 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:07.725 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:07.725 [395/710] Linking static target lib/librte_sched.a
00:03:07.983 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:07.983 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.241 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:08.241 [399/710] Linking target lib/librte_sched.so.24.0
00:03:08.241 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:08.241 [401/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:03:08.499 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:08.499 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:08.757 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:09.015 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:09.015 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:09.015 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:09.273 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:09.273 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:09.273 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:09.532 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:09.532 [412/710] Linking static target lib/librte_ipsec.a
00:03:09.532 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:09.790 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:03:09.790 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:03:09.790 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.790 [417/710] Linking target lib/librte_ipsec.so.24.0
00:03:09.790 [418/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:09.790 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:03:09.790 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a
00:03:10.048 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:10.048 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:03:10.048 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:11.032 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:11.032 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:11.032 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:11.032 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:11.032 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:11.032 [429/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:11.032 [430/710] Linking static target lib/librte_pdcp.a
00:03:11.032 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:11.032 [432/710] Linking static target lib/librte_fib.a
00:03:11.290 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.290 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.290 [435/710] Linking target lib/librte_pdcp.so.24.0
00:03:11.290 [436/710] Linking target lib/librte_fib.so.24.0
00:03:11.548 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:11.806 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:12.064 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:12.064 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:12.064 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:12.064 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:12.323 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:12.323 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:12.581 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:12.581 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:12.581 [447/710] Linking static target lib/librte_port.a
00:03:12.839 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:12.839 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:12.839 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:12.839 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:13.097 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.097 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:13.097 [454/710] Linking target lib/librte_port.so.24.0
00:03:13.097 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:13.097 [456/710] Linking static target lib/librte_pdump.a
00:03:13.097 [457/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:13.355 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:03:13.355 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:13.355 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.355 [461/710] Linking target lib/librte_pdump.so.24.0
00:03:13.614 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:13.872 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:14.131 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:14.131 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:14.131 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:14.131 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:14.131 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:14.393 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:14.653 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:14.653 [471/710] Linking static target lib/librte_table.a
00:03:14.653 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:14.653 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:15.220 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.220 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:15.220 [476/710] Linking target lib/librte_table.so.24.0
00:03:15.478 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:15.478 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:03:15.478 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:15.737 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:15.995 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:15.995 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:15.995 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:16.253 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:16.253 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:16.253 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:16.820 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:16.820 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:16.820 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:16.820 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:16.820 [491/710] Linking static target lib/librte_graph.a
00:03:16.820 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:17.079 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:17.646 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:17.646 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:17.646 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.646 [497/710] Linking target lib/librte_graph.so.24.0
00:03:17.646 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:03:17.646 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:17.904 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:18.163 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:18.163 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:18.163 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:18.163 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:18.163 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:18.422 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:18.680 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:18.680 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:18.939 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:18.939 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:18.939 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:18.939 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:18.939 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:19.198 [514/710] Linking static target lib/librte_node.a
00:03:19.198 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:19.456 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.456 [517/710] Linking target lib/librte_node.so.24.0
00:03:19.456 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:19.456 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:19.456 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:19.456 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:19.715 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:19.715 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:19.715 [524/710] Linking static target drivers/librte_bus_vdev.a
00:03:19.715 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:19.715 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:19.715 [527/710] Linking static target drivers/librte_bus_pci.a
00:03:19.974 [528/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:19.974 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:19.974 [530/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:19.974 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:19.974 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:19.974 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.233 [534/710] Linking target drivers/librte_bus_vdev.so.24.0
00:03:20.233 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:20.233 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:20.233 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:03:20.233 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.491 [539/710] Linking target drivers/librte_bus_pci.so.24.0
00:03:20.491 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:20.491 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:20.491 [542/710] Linking static target drivers/librte_mempool_ring.a
00:03:20.491 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:20.491 [544/710] Linking target drivers/librte_mempool_ring.so.24.0
00:03:20.491 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:03:20.752 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:21.013 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:21.271 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:21.529 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:21.529 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:21.529 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:22.465 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:22.465 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:22.465 [554/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:22.465 [555/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:22.465 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:22.465 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:23.033 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:23.033 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:23.292 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:23.292 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:23.292 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:23.860 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:24.119 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:24.119 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:24.119 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:24.377 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:24.636 [568/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:24.636 [569/710] Linking static target lib/librte_vhost.a
00:03:24.636 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:24.636 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:24.636 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:24.636 [573/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:24.895 [574/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:24.895 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:25.154 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:25.412 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:25.412 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:25.412 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:25.412 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:25.412 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:25.412 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:25.688 [583/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.968 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:25.968 [585/710] Linking target lib/librte_vhost.so.24.0
00:03:25.968 [586/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:25.968 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:25.968 [588/710] Linking static target drivers/librte_net_i40e.a
00:03:25.968 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:25.968 [590/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:25.968 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:26.226 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:26.226 [593/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:26.226 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:26.485 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.744 [596/710] Linking target drivers/librte_net_i40e.so.24.0
00:03:26.744 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:26.744 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:26.744 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:27.312 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:27.312 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:27.312 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:27.312 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:27.571 [604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:27.571 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:27.571 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:27.830 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:28.089 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:28.089 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:28.348 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:28.348 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:28.348 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:28.348 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:28.607 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:28.607 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:28.607 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:28.607 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:28.866 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:29.125 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:29.125 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:29.385 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:29.385 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:29.385 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:30.322 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:30.322 [625/710] Linking static target lib/librte_pipeline.a
00:03:30.322 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:30.322 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:30.322 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:30.581 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:30.581 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:30.581 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:30.840 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:30.840 [633/710] Linking target app/dpdk-dumpcap
00:03:30.840 [634/710] Linking target app/dpdk-graph
00:03:30.840 [635/710] Linking target app/dpdk-pdump
00:03:30.840 [636/710] Linking target app/dpdk-proc-info
00:03:31.099 [637/710] Linking target app/dpdk-test-acl
00:03:31.099 [638/710] Linking target app/dpdk-test-cmdline
00:03:31.099 [639/710] Linking target app/dpdk-test-crypto-perf
00:03:31.099 [640/710] Linking target app/dpdk-test-compress-perf
00:03:31.357 [641/710] Linking target app/dpdk-test-dma-perf
00:03:31.357 [642/710] Linking target app/dpdk-test-fib
00:03:31.357 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:31.616 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:31.616 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:31.874 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:31.874 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:31.874 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:32.133 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:32.133 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:32.133 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:32.390 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:32.390 [653/710] Linking target app/dpdk-test-gpudev
00:03:32.390 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:32.390 [655/710] Linking target app/dpdk-test-eventdev
00:03:32.649 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:32.649 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:32.907 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:32.907 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:32.907 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:32.907 [661/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.165 [662/710] Linking target app/dpdk-test-flow-perf
00:03:33.165 [663/710] Linking target lib/librte_pipeline.so.24.0
00:03:33.165 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:33.165 [665/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:33.165 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:33.165 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:33.733 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:33.733 [669/710] Linking target app/dpdk-test-bbdev
00:03:33.733 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:33.733 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:33.733 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:33.733 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:33.992 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:33.992 [675/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:34.251 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:34.251 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:34.510 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:34.510 [679/710] Linking target app/dpdk-test-pipeline
00:03:34.768 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:34.768 [681/710] Linking target app/dpdk-test-mldev
00:03:34.768 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:35.027 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:35.286 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:35.286 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:35.545 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:35.545 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:35.545 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:35.804 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:35.804 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:36.066 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:36.066 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:36.066 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:36.634 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:36.894 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:37.153 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:37.153 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:37.153 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:37.412 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:37.412 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:37.672 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:37.672 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:37.672 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:37.672 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:37.672 [705/710] Linking target app/dpdk-test-regex
00:03:37.931 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:38.190 [707/710] Linking target app/dpdk-test-sad
00:03:38.190 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:38.449 [709/710] Linking target app/dpdk-testpmd
00:03:38.449 [710/710] Linking target app/dpdk-test-security-perf
00:03:38.449 06:42:42 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:38.709 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:38.709 [0/1] Installing files.
00:03:38.971 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.973 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.973 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.974 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.975 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.975 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.975 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.235 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.236 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.236 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.236 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.498 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.498 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.498 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.498 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.498 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.499 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.499 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.499 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.499 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.499 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.499 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.501 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.502 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.502 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:39.502 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:39.502 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:39.502 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:39.502 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:39.502 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:39.502 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:39.502 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:39.502 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:39.502 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:39.502 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:39.502 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:39.502 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:39.502 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:39.502 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:39.502 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:39.502 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:39.502 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:39.502 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:39.502 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:39.502 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:39.502 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:39.502 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:39.502 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:39.502 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:39.502 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:39.502 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:39.502 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:39.502 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:39.502 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:39.502 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:39.502 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:39.502 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:39.502 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:39.502 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:39.502 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:39.502 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:39.502 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:39.502 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:39.502 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:39.502 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:39.502 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:39.502 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:39.502 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:39.502 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:39.502 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:39.502 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:39.502 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:39.502 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:39.502 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:39.502 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:39.502 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:39.502 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:39.502 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:39.502 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:39.502 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:39.502 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:39.502 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:39.502 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:39.502 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:39.502 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:39.502 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:39.502 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:39.502 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:39.502 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:39.502 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:39.502 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:39.502 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:39.502 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:39.502 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:39.502 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:39.502 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:39.502 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:39.502 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:39.502 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:39.502 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:39.502 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:39.502 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:39.502 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:39.502 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:39.502 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:39.502 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:39.502 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:39.502 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:39.502 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:39.502 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:39.502 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:39.502 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:39.502 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:39.502 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:39.502 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:39.502 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:39.502 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:39.503 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:39.503 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:39.503 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:39.503 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:39.503 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:39.503 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:39.503 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:39.503 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:39.503 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:39.503 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:39.503 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:39.503 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:39.503 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:39.503 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:39.503 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:39.503 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:39.503 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:39.503 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:39.503 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:39.503 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:39.503 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:39.503 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:39.503 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:39.503 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:39.503 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:39.503 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:39.503 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:39.503 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:39.503 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:39.503 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:39.503 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:39.503 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:39.503 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:39.503 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:39.503 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:39.503 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:39.503 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:39.503 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
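Note on the symlink entries above: the install step lays down the standard three-level Linux shared-library naming for every DPDK component. The fully versioned file (librte_*.so.24.0) is the real object, the soname link (librte_*.so.24) is what the dynamic loader resolves at run time, and the unversioned link (librte_*.so) is what the compile-time linker finds via -lrte_*. A minimal sketch of the same chain, using an illustrative library name and a throwaway directory rather than anything taken from this run:

    # Reproduce the three-level naming for one library in a scratch directory.
    mkdir -p /tmp/solink-demo && cd /tmp/solink-demo
    touch librte_log.so.24.0                      # stand-in for the real shared object
    ln -sfn librte_log.so.24.0 librte_log.so.24   # soname link, resolved by the loader
    ln -sfn librte_log.so.24 librte_log.so        # dev link, used by 'cc ... -lrte_log'
    ls -l                                         # shows the two-hop chain
    # On a real build the embedded soname can be confirmed with:
    #   readelf -d librte_log.so.24.0 | grep SONAME

The separate copies under dpdk/pmds-24.0 (the './librte_bus_pci.so' -> ... lines) appear to exist so drivers can be loaded from a single plugin directory at run time, which is what the symlink-drivers-solibs.sh script invoked below takes care of.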
00:03:39.503 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:39.503 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:39.503 06:42:44 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:39.762 06:42:44 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:39.762 06:42:44 -- common/autobuild_common.sh@203 -- $ cat 00:03:39.762 ************************************ 00:03:39.762 END TEST build_native_dpdk 00:03:39.762 ************************************ 00:03:39.762 06:42:44 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:39.762 00:03:39.762 real 1m2.326s 00:03:39.762 user 7m40.410s 00:03:39.762 sys 1m6.242s 00:03:39.762 06:42:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:39.762 06:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:39.762 06:42:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:39.762 06:42:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:39.762 06:42:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:39.762 06:42:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:39.762 06:42:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:39.762 06:42:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:39.762 06:42:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:39.763 06:42:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:39.763 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:40.021 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.021 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:40.021 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:40.281 Using 'verbs' RDMA provider 00:03:53.421 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:04:05.626 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:05.884 Creating mk/config.mk...done. 00:04:05.884 Creating mk/cc.flags.mk...done. 00:04:05.884 Type 'make' to build. 00:04:05.884 06:43:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:05.884 06:43:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:05.884 06:43:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:05.884 06:43:10 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.884 ************************************ 00:04:05.884 START TEST make 00:04:05.884 ************************************ 00:04:05.884 06:43:10 -- common/autotest_common.sh@1114 -- $ make -j10 00:04:06.142 make[1]: Nothing to be done for 'all'. 
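At this point the log switches from building DPDK to configuring SPDK against it: the configure line above passes --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, and the next entries confirm that the DPDK pkg-config data was picked up. A hedged sketch of how that discovery works; the PKG_CONFIG_PATH value mirrors the path printed in the log, the pkg-config calls are generic usage, and the version shown is only what a v23.11-based DPDK would be expected to report:

    # Inspect the DPDK installation the way a dependent configure step would.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # expected to print something like 23.11.0
    pkg-config --cflags libdpdk       # -I flags pointing at build/include
    pkg-config --libs libdpdk         # -L flag plus the -lrte_* link line

The libdpdk.pc and libdpdk-libs.pc files that make this work were installed into build/lib/pkgconfig a few entries earlier.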
00:04:32.682 CC lib/ut/ut.o 00:04:32.682 CC lib/ut_mock/mock.o 00:04:32.682 CC lib/log/log.o 00:04:32.682 CC lib/log/log_flags.o 00:04:32.682 CC lib/log/log_deprecated.o 00:04:32.682 LIB libspdk_ut_mock.a 00:04:32.682 LIB libspdk_ut.a 00:04:32.682 LIB libspdk_log.a 00:04:32.682 SO libspdk_ut_mock.so.5.0 00:04:32.682 SO libspdk_ut.so.1.0 00:04:32.682 SO libspdk_log.so.6.1 00:04:32.682 SYMLINK libspdk_ut_mock.so 00:04:32.682 SYMLINK libspdk_ut.so 00:04:32.682 SYMLINK libspdk_log.so 00:04:32.682 CC lib/dma/dma.o 00:04:32.682 CC lib/util/bit_array.o 00:04:32.682 CC lib/util/base64.o 00:04:32.682 CC lib/util/cpuset.o 00:04:32.682 CC lib/util/crc16.o 00:04:32.682 CC lib/util/crc32.o 00:04:32.682 CC lib/util/crc32c.o 00:04:32.682 CC lib/ioat/ioat.o 00:04:32.682 CXX lib/trace_parser/trace.o 00:04:32.682 CC lib/vfio_user/host/vfio_user_pci.o 00:04:32.682 CC lib/util/crc32_ieee.o 00:04:32.682 CC lib/util/crc64.o 00:04:32.682 CC lib/util/dif.o 00:04:32.682 CC lib/util/fd.o 00:04:32.682 LIB libspdk_dma.a 00:04:32.682 SO libspdk_dma.so.3.0 00:04:32.682 CC lib/util/file.o 00:04:32.682 LIB libspdk_ioat.a 00:04:32.682 CC lib/util/hexlify.o 00:04:32.682 SO libspdk_ioat.so.6.0 00:04:32.682 SYMLINK libspdk_dma.so 00:04:32.682 CC lib/vfio_user/host/vfio_user.o 00:04:32.682 CC lib/util/iov.o 00:04:32.682 CC lib/util/math.o 00:04:32.682 CC lib/util/pipe.o 00:04:32.682 SYMLINK libspdk_ioat.so 00:04:32.682 CC lib/util/strerror_tls.o 00:04:32.682 CC lib/util/string.o 00:04:32.682 CC lib/util/uuid.o 00:04:32.682 CC lib/util/fd_group.o 00:04:32.682 CC lib/util/xor.o 00:04:32.682 CC lib/util/zipf.o 00:04:32.682 LIB libspdk_vfio_user.a 00:04:32.682 SO libspdk_vfio_user.so.4.0 00:04:32.682 SYMLINK libspdk_vfio_user.so 00:04:32.682 LIB libspdk_util.a 00:04:32.682 SO libspdk_util.so.8.0 00:04:32.682 SYMLINK libspdk_util.so 00:04:32.682 CC lib/conf/conf.o 00:04:32.682 CC lib/vmd/vmd.o 00:04:32.682 CC lib/rdma/common.o 00:04:32.682 CC lib/vmd/led.o 00:04:32.682 CC lib/rdma/rdma_verbs.o 00:04:32.682 CC lib/env_dpdk/env.o 00:04:32.682 CC lib/env_dpdk/memory.o 00:04:32.682 CC lib/json/json_parse.o 00:04:32.682 CC lib/idxd/idxd.o 00:04:32.682 LIB libspdk_trace_parser.a 00:04:32.682 SO libspdk_trace_parser.so.4.0 00:04:32.682 CC lib/env_dpdk/pci.o 00:04:32.682 SYMLINK libspdk_trace_parser.so 00:04:32.682 CC lib/env_dpdk/init.o 00:04:32.682 LIB libspdk_conf.a 00:04:32.682 SO libspdk_conf.so.5.0 00:04:32.682 CC lib/json/json_util.o 00:04:32.682 CC lib/json/json_write.o 00:04:32.682 LIB libspdk_rdma.a 00:04:32.682 SYMLINK libspdk_conf.so 00:04:32.682 CC lib/env_dpdk/threads.o 00:04:32.682 SO libspdk_rdma.so.5.0 00:04:32.682 SYMLINK libspdk_rdma.so 00:04:32.682 CC lib/env_dpdk/pci_ioat.o 00:04:32.682 CC lib/env_dpdk/pci_virtio.o 00:04:32.682 CC lib/env_dpdk/pci_vmd.o 00:04:32.682 CC lib/env_dpdk/pci_idxd.o 00:04:32.682 CC lib/idxd/idxd_user.o 00:04:32.682 CC lib/env_dpdk/pci_event.o 00:04:32.682 LIB libspdk_json.a 00:04:32.682 CC lib/env_dpdk/sigbus_handler.o 00:04:32.682 CC lib/env_dpdk/pci_dpdk.o 00:04:32.682 SO libspdk_json.so.5.1 00:04:32.682 LIB libspdk_vmd.a 00:04:32.682 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:32.682 SO libspdk_vmd.so.5.0 00:04:32.682 SYMLINK libspdk_json.so 00:04:32.683 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:32.683 CC lib/idxd/idxd_kernel.o 00:04:32.683 SYMLINK libspdk_vmd.so 00:04:32.683 CC lib/jsonrpc/jsonrpc_server.o 00:04:32.683 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:32.683 CC lib/jsonrpc/jsonrpc_client.o 00:04:32.683 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:32.683 LIB libspdk_idxd.a 00:04:32.683 SO 
libspdk_idxd.so.11.0 00:04:32.683 SYMLINK libspdk_idxd.so 00:04:32.683 LIB libspdk_jsonrpc.a 00:04:32.683 SO libspdk_jsonrpc.so.5.1 00:04:32.683 SYMLINK libspdk_jsonrpc.so 00:04:32.683 CC lib/rpc/rpc.o 00:04:32.683 LIB libspdk_env_dpdk.a 00:04:32.683 LIB libspdk_rpc.a 00:04:32.941 SO libspdk_rpc.so.5.0 00:04:32.941 SO libspdk_env_dpdk.so.13.0 00:04:32.941 SYMLINK libspdk_rpc.so 00:04:32.941 SYMLINK libspdk_env_dpdk.so 00:04:32.941 CC lib/notify/notify.o 00:04:32.941 CC lib/notify/notify_rpc.o 00:04:32.941 CC lib/trace/trace_flags.o 00:04:32.941 CC lib/trace/trace.o 00:04:32.941 CC lib/trace/trace_rpc.o 00:04:32.941 CC lib/sock/sock.o 00:04:32.941 CC lib/sock/sock_rpc.o 00:04:33.199 LIB libspdk_notify.a 00:04:33.199 SO libspdk_notify.so.5.0 00:04:33.199 LIB libspdk_trace.a 00:04:33.199 SO libspdk_trace.so.9.0 00:04:33.199 SYMLINK libspdk_notify.so 00:04:33.458 SYMLINK libspdk_trace.so 00:04:33.458 LIB libspdk_sock.a 00:04:33.458 SO libspdk_sock.so.8.0 00:04:33.458 SYMLINK libspdk_sock.so 00:04:33.458 CC lib/thread/iobuf.o 00:04:33.458 CC lib/thread/thread.o 00:04:33.716 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.716 CC lib/nvme/nvme_fabric.o 00:04:33.716 CC lib/nvme/nvme_ctrlr.o 00:04:33.716 CC lib/nvme/nvme_ns_cmd.o 00:04:33.716 CC lib/nvme/nvme_ns.o 00:04:33.716 CC lib/nvme/nvme_pcie_common.o 00:04:33.716 CC lib/nvme/nvme_qpair.o 00:04:33.716 CC lib/nvme/nvme_pcie.o 00:04:33.975 CC lib/nvme/nvme.o 00:04:34.541 CC lib/nvme/nvme_quirks.o 00:04:34.541 CC lib/nvme/nvme_transport.o 00:04:34.541 CC lib/nvme/nvme_discovery.o 00:04:34.799 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.799 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.799 CC lib/nvme/nvme_tcp.o 00:04:34.799 CC lib/nvme/nvme_opal.o 00:04:34.799 CC lib/nvme/nvme_io_msg.o 00:04:35.058 CC lib/nvme/nvme_poll_group.o 00:04:35.058 LIB libspdk_thread.a 00:04:35.316 SO libspdk_thread.so.9.0 00:04:35.316 CC lib/nvme/nvme_zns.o 00:04:35.316 SYMLINK libspdk_thread.so 00:04:35.316 CC lib/nvme/nvme_cuse.o 00:04:35.316 CC lib/nvme/nvme_vfio_user.o 00:04:35.316 CC lib/nvme/nvme_rdma.o 00:04:35.575 CC lib/accel/accel.o 00:04:35.575 CC lib/blob/blobstore.o 00:04:35.575 CC lib/blob/request.o 00:04:35.833 CC lib/blob/zeroes.o 00:04:35.833 CC lib/blob/blob_bs_dev.o 00:04:35.833 CC lib/accel/accel_rpc.o 00:04:36.091 CC lib/accel/accel_sw.o 00:04:36.091 CC lib/init/json_config.o 00:04:36.091 CC lib/init/subsystem.o 00:04:36.091 CC lib/virtio/virtio.o 00:04:36.091 CC lib/init/subsystem_rpc.o 00:04:36.349 CC lib/virtio/virtio_vhost_user.o 00:04:36.349 CC lib/init/rpc.o 00:04:36.349 CC lib/virtio/virtio_vfio_user.o 00:04:36.349 CC lib/virtio/virtio_pci.o 00:04:36.349 LIB libspdk_init.a 00:04:36.607 SO libspdk_init.so.4.0 00:04:36.607 SYMLINK libspdk_init.so 00:04:36.607 LIB libspdk_accel.a 00:04:36.607 SO libspdk_accel.so.14.0 00:04:36.607 LIB libspdk_virtio.a 00:04:36.607 SO libspdk_virtio.so.6.0 00:04:36.607 SYMLINK libspdk_accel.so 00:04:36.607 CC lib/event/app.o 00:04:36.607 CC lib/event/reactor.o 00:04:36.607 CC lib/event/log_rpc.o 00:04:36.607 CC lib/event/scheduler_static.o 00:04:36.607 CC lib/event/app_rpc.o 00:04:36.864 SYMLINK libspdk_virtio.so 00:04:36.865 LIB libspdk_nvme.a 00:04:36.865 CC lib/bdev/bdev.o 00:04:36.865 CC lib/bdev/bdev_rpc.o 00:04:36.865 CC lib/bdev/bdev_zone.o 00:04:36.865 CC lib/bdev/part.o 00:04:36.865 CC lib/bdev/scsi_nvme.o 00:04:37.122 SO libspdk_nvme.so.12.0 00:04:37.122 LIB libspdk_event.a 00:04:37.122 SO libspdk_event.so.12.0 00:04:37.122 SYMLINK libspdk_nvme.so 00:04:37.379 SYMLINK libspdk_event.so 00:04:38.753 LIB libspdk_blob.a 
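The four make tags in this stretch of the log each stand for a longer command: CC compiles one object, LIB archives objects into a static libspdk_*.a, SO links the versioned shared object, and SYMLINK drops the unversioned development link next to it. Roughly, and this is a simplified sketch rather than the exact flags this build uses:

    # Approximate expansion of the quiet-make tags, using lib/log as the example.
    cc -c -o log.o log.c                          # "CC lib/log/log.o"
    ar crs libspdk_log.a log.o                    # "LIB libspdk_log.a"
    cc -shared -Wl,-soname,libspdk_log.so.6 \
       -o libspdk_log.so.6.1 log.o                # "SO libspdk_log.so.6.1"
    ln -sfn libspdk_log.so.6.1 libspdk_log.so     # "SYMLINK libspdk_log.so"

The SO/SYMLINK pairs only appear because configure was run with --with-shared; a static-only build would stop at the LIB step.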
00:04:38.753 SO libspdk_blob.so.10.1 00:04:38.753 SYMLINK libspdk_blob.so 00:04:38.753 CC lib/blobfs/tree.o 00:04:38.753 CC lib/lvol/lvol.o 00:04:38.753 CC lib/blobfs/blobfs.o 00:04:39.688 LIB libspdk_bdev.a 00:04:39.688 SO libspdk_bdev.so.14.0 00:04:39.688 SYMLINK libspdk_bdev.so 00:04:39.688 LIB libspdk_blobfs.a 00:04:39.946 LIB libspdk_lvol.a 00:04:39.946 SO libspdk_blobfs.so.9.0 00:04:39.946 SO libspdk_lvol.so.9.1 00:04:39.946 CC lib/scsi/dev.o 00:04:39.946 CC lib/nbd/nbd.o 00:04:39.946 CC lib/ublk/ublk.o 00:04:39.946 CC lib/scsi/lun.o 00:04:39.946 CC lib/nvmf/ctrlr.o 00:04:39.946 CC lib/ublk/ublk_rpc.o 00:04:39.946 CC lib/nvmf/ctrlr_discovery.o 00:04:39.946 SYMLINK libspdk_blobfs.so 00:04:39.946 CC lib/ftl/ftl_core.o 00:04:39.946 CC lib/ftl/ftl_init.o 00:04:39.946 SYMLINK libspdk_lvol.so 00:04:39.946 CC lib/ftl/ftl_layout.o 00:04:40.204 CC lib/ftl/ftl_debug.o 00:04:40.204 CC lib/ftl/ftl_io.o 00:04:40.204 CC lib/nbd/nbd_rpc.o 00:04:40.204 CC lib/scsi/port.o 00:04:40.204 CC lib/nvmf/ctrlr_bdev.o 00:04:40.204 CC lib/nvmf/subsystem.o 00:04:40.204 LIB libspdk_nbd.a 00:04:40.204 CC lib/nvmf/nvmf.o 00:04:40.462 CC lib/nvmf/nvmf_rpc.o 00:04:40.462 SO libspdk_nbd.so.6.0 00:04:40.462 CC lib/scsi/scsi.o 00:04:40.462 CC lib/ftl/ftl_sb.o 00:04:40.462 SYMLINK libspdk_nbd.so 00:04:40.462 CC lib/scsi/scsi_bdev.o 00:04:40.462 CC lib/scsi/scsi_pr.o 00:04:40.462 CC lib/scsi/scsi_rpc.o 00:04:40.462 LIB libspdk_ublk.a 00:04:40.462 SO libspdk_ublk.so.2.0 00:04:40.720 CC lib/ftl/ftl_l2p.o 00:04:40.720 SYMLINK libspdk_ublk.so 00:04:40.720 CC lib/ftl/ftl_l2p_flat.o 00:04:40.720 CC lib/ftl/ftl_nv_cache.o 00:04:40.720 CC lib/ftl/ftl_band.o 00:04:40.720 CC lib/ftl/ftl_band_ops.o 00:04:40.977 CC lib/scsi/task.o 00:04:40.977 CC lib/nvmf/transport.o 00:04:40.977 CC lib/nvmf/tcp.o 00:04:41.235 LIB libspdk_scsi.a 00:04:41.235 CC lib/ftl/ftl_writer.o 00:04:41.235 SO libspdk_scsi.so.8.0 00:04:41.235 CC lib/nvmf/rdma.o 00:04:41.235 CC lib/ftl/ftl_rq.o 00:04:41.235 CC lib/ftl/ftl_reloc.o 00:04:41.235 SYMLINK libspdk_scsi.so 00:04:41.235 CC lib/ftl/ftl_l2p_cache.o 00:04:41.493 CC lib/ftl/ftl_p2l.o 00:04:41.493 CC lib/iscsi/conn.o 00:04:41.493 CC lib/iscsi/init_grp.o 00:04:41.493 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.493 CC lib/iscsi/iscsi.o 00:04:41.493 CC lib/vhost/vhost.o 00:04:41.751 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.751 CC lib/iscsi/md5.o 00:04:41.751 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.751 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.010 CC lib/vhost/vhost_rpc.o 00:04:42.010 CC lib/vhost/vhost_scsi.o 00:04:42.010 CC lib/iscsi/param.o 00:04:42.010 CC lib/iscsi/portal_grp.o 00:04:42.010 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:42.268 CC lib/iscsi/tgt_node.o 00:04:42.268 CC lib/iscsi/iscsi_subsystem.o 00:04:42.268 CC lib/vhost/vhost_blk.o 00:04:42.268 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:42.526 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.526 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.526 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.526 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.526 CC lib/iscsi/iscsi_rpc.o 00:04:42.784 CC lib/iscsi/task.o 00:04:42.784 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.784 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.784 CC lib/vhost/rte_vhost_user.o 00:04:42.784 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:43.042 CC lib/ftl/utils/ftl_conf.o 00:04:43.042 CC lib/ftl/utils/ftl_md.o 00:04:43.042 CC lib/ftl/utils/ftl_mempool.o 00:04:43.043 CC lib/ftl/utils/ftl_bitmap.o 00:04:43.043 LIB libspdk_iscsi.a 00:04:43.043 CC lib/ftl/utils/ftl_property.o 00:04:43.043 SO libspdk_iscsi.so.7.0 00:04:43.043 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:43.300 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:43.300 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:43.301 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:43.301 SYMLINK libspdk_iscsi.so 00:04:43.301 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:43.301 LIB libspdk_nvmf.a 00:04:43.301 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:43.301 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:43.301 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:43.301 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:43.558 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:43.558 CC lib/ftl/base/ftl_base_dev.o 00:04:43.558 SO libspdk_nvmf.so.17.0 00:04:43.558 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.558 CC lib/ftl/ftl_trace.o 00:04:43.558 SYMLINK libspdk_nvmf.so 00:04:43.816 LIB libspdk_ftl.a 00:04:44.076 LIB libspdk_vhost.a 00:04:44.076 SO libspdk_ftl.so.8.0 00:04:44.076 SO libspdk_vhost.so.7.1 00:04:44.076 SYMLINK libspdk_vhost.so 00:04:44.338 SYMLINK libspdk_ftl.so 00:04:44.596 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.596 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.596 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.596 CC module/sock/uring/uring.o 00:04:44.596 CC module/sock/posix/posix.o 00:04:44.596 CC module/accel/error/accel_error.o 00:04:44.596 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.596 CC module/blob/bdev/blob_bdev.o 00:04:44.596 CC module/accel/dsa/accel_dsa.o 00:04:44.596 CC module/accel/ioat/accel_ioat.o 00:04:44.596 LIB libspdk_env_dpdk_rpc.a 00:04:44.596 SO libspdk_env_dpdk_rpc.so.5.0 00:04:44.854 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.854 LIB libspdk_scheduler_gscheduler.a 00:04:44.854 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.854 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.854 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:44.854 SO libspdk_scheduler_gscheduler.so.3.0 00:04:44.854 CC module/accel/error/accel_error_rpc.o 00:04:44.854 LIB libspdk_scheduler_dynamic.a 00:04:44.854 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.854 SO libspdk_scheduler_dynamic.so.3.0 00:04:44.854 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.854 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.854 LIB libspdk_blob_bdev.a 00:04:44.854 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.854 SO libspdk_blob_bdev.so.10.1 00:04:44.854 LIB libspdk_accel_ioat.a 00:04:44.854 LIB libspdk_accel_error.a 00:04:44.854 SYMLINK libspdk_blob_bdev.so 00:04:44.854 LIB libspdk_accel_dsa.a 00:04:44.854 CC module/accel/iaa/accel_iaa.o 00:04:44.854 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.854 SO libspdk_accel_ioat.so.5.0 00:04:45.112 SO libspdk_accel_error.so.1.0 00:04:45.112 SO libspdk_accel_dsa.so.4.0 00:04:45.112 SYMLINK libspdk_accel_ioat.so 00:04:45.112 SYMLINK libspdk_accel_error.so 00:04:45.112 SYMLINK libspdk_accel_dsa.so 00:04:45.112 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.112 CC module/bdev/delay/vbdev_delay.o 00:04:45.112 CC module/bdev/error/vbdev_error.o 00:04:45.112 CC module/bdev/gpt/gpt.o 00:04:45.112 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.112 CC module/bdev/malloc/bdev_malloc.o 00:04:45.112 LIB libspdk_accel_iaa.a 00:04:45.112 SO libspdk_accel_iaa.so.2.0 00:04:45.112 CC module/bdev/null/bdev_null.o 00:04:45.370 SYMLINK libspdk_accel_iaa.so 00:04:45.370 CC module/bdev/null/bdev_null_rpc.o 00:04:45.370 LIB libspdk_sock_uring.a 00:04:45.370 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.370 LIB libspdk_sock_posix.a 00:04:45.370 SO libspdk_sock_uring.so.4.0 00:04:45.370 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.370 SO libspdk_sock_posix.so.5.0 00:04:45.370 SYMLINK 
libspdk_sock_uring.so 00:04:45.370 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.370 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.370 SYMLINK libspdk_sock_posix.so 00:04:45.370 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.629 LIB libspdk_bdev_null.a 00:04:45.629 LIB libspdk_blobfs_bdev.a 00:04:45.629 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.629 SO libspdk_bdev_null.so.5.0 00:04:45.629 SO libspdk_blobfs_bdev.so.5.0 00:04:45.629 CC module/bdev/nvme/bdev_nvme.o 00:04:45.629 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.629 LIB libspdk_bdev_error.a 00:04:45.629 SYMLINK libspdk_bdev_null.so 00:04:45.629 SO libspdk_bdev_error.so.5.0 00:04:45.629 LIB libspdk_bdev_delay.a 00:04:45.629 SYMLINK libspdk_blobfs_bdev.so 00:04:45.629 CC module/bdev/nvme/nvme_rpc.o 00:04:45.629 SO libspdk_bdev_delay.so.5.0 00:04:45.629 LIB libspdk_bdev_gpt.a 00:04:45.629 SYMLINK libspdk_bdev_error.so 00:04:45.629 SO libspdk_bdev_gpt.so.5.0 00:04:45.629 LIB libspdk_bdev_malloc.a 00:04:45.629 SYMLINK libspdk_bdev_delay.so 00:04:45.629 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.887 SO libspdk_bdev_malloc.so.5.0 00:04:45.887 SYMLINK libspdk_bdev_gpt.so 00:04:45.887 CC module/bdev/raid/bdev_raid.o 00:04:45.887 SYMLINK libspdk_bdev_malloc.so 00:04:45.887 LIB libspdk_bdev_lvol.a 00:04:45.887 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.887 CC module/bdev/split/vbdev_split.o 00:04:45.887 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.887 SO libspdk_bdev_lvol.so.5.0 00:04:45.887 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.887 CC module/bdev/uring/bdev_uring.o 00:04:45.887 SYMLINK libspdk_bdev_lvol.so 00:04:45.887 CC module/bdev/raid/bdev_raid_sb.o 00:04:46.145 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.145 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.145 CC module/bdev/split/vbdev_split_rpc.o 00:04:46.145 CC module/bdev/nvme/vbdev_opal.o 00:04:46.145 CC module/bdev/uring/bdev_uring_rpc.o 00:04:46.145 LIB libspdk_bdev_passthru.a 00:04:46.145 LIB libspdk_bdev_zone_block.a 00:04:46.145 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.403 SO libspdk_bdev_passthru.so.5.0 00:04:46.403 LIB libspdk_bdev_split.a 00:04:46.403 SO libspdk_bdev_zone_block.so.5.0 00:04:46.403 SO libspdk_bdev_split.so.5.0 00:04:46.403 SYMLINK libspdk_bdev_passthru.so 00:04:46.403 CC module/bdev/raid/raid0.o 00:04:46.403 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.403 SYMLINK libspdk_bdev_zone_block.so 00:04:46.403 SYMLINK libspdk_bdev_split.so 00:04:46.403 CC module/bdev/aio/bdev_aio.o 00:04:46.403 LIB libspdk_bdev_uring.a 00:04:46.403 SO libspdk_bdev_uring.so.5.0 00:04:46.403 CC module/bdev/ftl/bdev_ftl.o 00:04:46.403 CC module/bdev/raid/raid1.o 00:04:46.403 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.403 SYMLINK libspdk_bdev_uring.so 00:04:46.403 CC module/bdev/raid/concat.o 00:04:46.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.662 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.662 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.662 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.662 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.662 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.662 LIB libspdk_bdev_raid.a 00:04:46.920 SO libspdk_bdev_raid.so.5.0 00:04:46.920 LIB libspdk_bdev_aio.a 00:04:46.920 SYMLINK libspdk_bdev_raid.so 00:04:46.920 LIB libspdk_bdev_iscsi.a 00:04:46.920 SO libspdk_bdev_aio.so.5.0 00:04:46.920 SO libspdk_bdev_iscsi.so.5.0 00:04:46.920 LIB libspdk_bdev_ftl.a 00:04:46.920 SYMLINK libspdk_bdev_aio.so 00:04:46.920 SO libspdk_bdev_ftl.so.5.0 00:04:46.920 SYMLINK libspdk_bdev_iscsi.so 
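Because this is a --with-shared build, each of the bdev modules above becomes its own libspdk_bdev_*.so, and anything linked against them only runs if the loader can find the whole set. A quick way to sanity-check that, sketched under the assumption that this repo places libraries in build/lib and binaries in build/bin (the spdk_tgt path is illustrative):

    # Verify that a freshly linked binary resolves every libspdk_* object.
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/spdk/build/lib:$LD_LIBRARY_PATH
    if ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt | grep -q 'not found'; then
        echo 'some shared objects are missing'
    else
        echo 'all libraries resolved'
    fi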
00:04:47.178 LIB libspdk_bdev_virtio.a 00:04:47.178 SYMLINK libspdk_bdev_ftl.so 00:04:47.178 SO libspdk_bdev_virtio.so.5.0 00:04:47.178 SYMLINK libspdk_bdev_virtio.so 00:04:47.745 LIB libspdk_bdev_nvme.a 00:04:48.003 SO libspdk_bdev_nvme.so.6.0 00:04:48.003 SYMLINK libspdk_bdev_nvme.so 00:04:48.261 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.261 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.261 CC module/event/subsystems/sock/sock.o 00:04:48.261 CC module/event/subsystems/vmd/vmd.o 00:04:48.261 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.261 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.261 CC module/event/subsystems/scheduler/scheduler.o 00:04:48.520 LIB libspdk_event_sock.a 00:04:48.520 LIB libspdk_event_scheduler.a 00:04:48.520 LIB libspdk_event_iobuf.a 00:04:48.520 LIB libspdk_event_vhost_blk.a 00:04:48.520 LIB libspdk_event_vmd.a 00:04:48.520 SO libspdk_event_sock.so.4.0 00:04:48.520 SO libspdk_event_scheduler.so.3.0 00:04:48.520 SO libspdk_event_vhost_blk.so.2.0 00:04:48.520 SO libspdk_event_iobuf.so.2.0 00:04:48.520 SO libspdk_event_vmd.so.5.0 00:04:48.520 SYMLINK libspdk_event_sock.so 00:04:48.520 SYMLINK libspdk_event_iobuf.so 00:04:48.778 SYMLINK libspdk_event_scheduler.so 00:04:48.778 SYMLINK libspdk_event_vhost_blk.so 00:04:48.778 SYMLINK libspdk_event_vmd.so 00:04:48.778 CC module/event/subsystems/accel/accel.o 00:04:49.037 LIB libspdk_event_accel.a 00:04:49.037 SO libspdk_event_accel.so.5.0 00:04:49.037 SYMLINK libspdk_event_accel.so 00:04:49.295 CC module/event/subsystems/bdev/bdev.o 00:04:49.554 LIB libspdk_event_bdev.a 00:04:49.554 SO libspdk_event_bdev.so.5.0 00:04:49.554 SYMLINK libspdk_event_bdev.so 00:04:49.812 CC module/event/subsystems/scsi/scsi.o 00:04:49.812 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:49.812 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:49.812 CC module/event/subsystems/nbd/nbd.o 00:04:49.812 CC module/event/subsystems/ublk/ublk.o 00:04:49.812 LIB libspdk_event_nbd.a 00:04:49.812 LIB libspdk_event_ublk.a 00:04:49.812 LIB libspdk_event_scsi.a 00:04:49.812 SO libspdk_event_nbd.so.5.0 00:04:49.812 SO libspdk_event_ublk.so.2.0 00:04:49.812 SO libspdk_event_scsi.so.5.0 00:04:50.070 SYMLINK libspdk_event_nbd.so 00:04:50.070 LIB libspdk_event_nvmf.a 00:04:50.070 SYMLINK libspdk_event_ublk.so 00:04:50.070 SYMLINK libspdk_event_scsi.so 00:04:50.070 SO libspdk_event_nvmf.so.5.0 00:04:50.070 SYMLINK libspdk_event_nvmf.so 00:04:50.070 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:50.070 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.329 LIB libspdk_event_vhost_scsi.a 00:04:50.329 SO libspdk_event_vhost_scsi.so.2.0 00:04:50.329 LIB libspdk_event_iscsi.a 00:04:50.329 SO libspdk_event_iscsi.so.5.0 00:04:50.329 SYMLINK libspdk_event_vhost_scsi.so 00:04:50.329 SYMLINK libspdk_event_iscsi.so 00:04:50.587 SO libspdk.so.5.0 00:04:50.587 SYMLINK libspdk.so 00:04:50.845 CC app/trace_record/trace_record.o 00:04:50.845 CXX app/trace/trace.o 00:04:50.845 CC app/nvmf_tgt/nvmf_main.o 00:04:50.845 CC app/iscsi_tgt/iscsi_tgt.o 00:04:50.845 CC examples/nvme/hello_world/hello_world.o 00:04:50.845 CC examples/ioat/perf/perf.o 00:04:50.845 CC examples/accel/perf/accel_perf.o 00:04:50.845 CC examples/blob/hello_world/hello_blob.o 00:04:50.845 CC test/accel/dif/dif.o 00:04:50.845 CC examples/bdev/hello_world/hello_bdev.o 00:04:51.102 LINK nvmf_tgt 00:04:51.102 LINK iscsi_tgt 00:04:51.102 LINK spdk_trace_record 00:04:51.102 LINK ioat_perf 00:04:51.102 LINK hello_world 00:04:51.102 LINK hello_bdev 00:04:51.102 LINK hello_blob 
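The LINK entries above produce the small example and test binaries (hello_world, hello_bdev, hello_blob, the fuzzers, and so on) that the upcoming test phases exercise. Like any DPDK-backed SPDK app, they need hugepages before they will start; a minimal sketch using the repo's own setup script, with the HUGEMEM value and the binary location being assumptions about this environment rather than values taken from the log:

    # Reserve hugepages (and rebind devices) with the repo's setup script,
    # run one example, then undo the changes.
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world   # path assumed
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset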
00:04:51.102 LINK spdk_trace 00:04:51.359 LINK dif 00:04:51.359 CC examples/ioat/verify/verify.o 00:04:51.359 LINK accel_perf 00:04:51.359 CC examples/bdev/bdevperf/bdevperf.o 00:04:51.359 CC examples/nvme/reconnect/reconnect.o 00:04:51.359 CC app/spdk_tgt/spdk_tgt.o 00:04:51.359 CC test/app/bdev_svc/bdev_svc.o 00:04:51.359 CC examples/blob/cli/blobcli.o 00:04:51.359 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.359 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:51.617 LINK verify 00:04:51.617 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:51.617 LINK spdk_tgt 00:04:51.617 CC app/spdk_lspci/spdk_lspci.o 00:04:51.617 LINK bdev_svc 00:04:51.617 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:51.617 LINK reconnect 00:04:51.617 LINK spdk_lspci 00:04:51.876 CC examples/sock/hello_world/hello_sock.o 00:04:51.876 LINK nvme_fuzz 00:04:51.876 CC examples/vmd/lsvmd/lsvmd.o 00:04:51.876 CC examples/nvmf/nvmf/nvmf.o 00:04:51.876 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.876 LINK blobcli 00:04:51.876 CC app/spdk_nvme_perf/perf.o 00:04:52.134 LINK hello_sock 00:04:52.134 LINK lsvmd 00:04:52.134 LINK vhost_fuzz 00:04:52.134 CC test/app/histogram_perf/histogram_perf.o 00:04:52.134 LINK bdevperf 00:04:52.134 CC test/app/jsoncat/jsoncat.o 00:04:52.391 LINK nvmf 00:04:52.391 CC examples/vmd/led/led.o 00:04:52.391 CC app/spdk_nvme_identify/identify.o 00:04:52.391 LINK histogram_perf 00:04:52.391 CC examples/util/zipf/zipf.o 00:04:52.391 LINK jsoncat 00:04:52.391 LINK led 00:04:52.391 LINK nvme_manage 00:04:52.391 CC test/bdev/bdevio/bdevio.o 00:04:52.649 LINK zipf 00:04:52.649 CC examples/nvme/arbitration/arbitration.o 00:04:52.649 CC examples/nvme/hotplug/hotplug.o 00:04:52.649 CC examples/thread/thread/thread_ex.o 00:04:52.649 CC test/app/stub/stub.o 00:04:52.907 CC test/blobfs/mkfs/mkfs.o 00:04:52.907 CC examples/idxd/perf/perf.o 00:04:52.907 LINK spdk_nvme_perf 00:04:52.907 LINK stub 00:04:52.907 LINK bdevio 00:04:52.907 LINK hotplug 00:04:52.907 LINK arbitration 00:04:52.907 LINK thread 00:04:52.907 LINK mkfs 00:04:53.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:53.165 LINK spdk_nvme_identify 00:04:53.165 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.165 LINK iscsi_fuzz 00:04:53.165 CC examples/nvme/abort/abort.o 00:04:53.165 TEST_HEADER include/spdk/accel.h 00:04:53.165 TEST_HEADER include/spdk/accel_module.h 00:04:53.165 TEST_HEADER include/spdk/assert.h 00:04:53.165 TEST_HEADER include/spdk/barrier.h 00:04:53.165 TEST_HEADER include/spdk/base64.h 00:04:53.165 TEST_HEADER include/spdk/bdev.h 00:04:53.165 TEST_HEADER include/spdk/bdev_module.h 00:04:53.165 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.165 TEST_HEADER include/spdk/bit_array.h 00:04:53.165 TEST_HEADER include/spdk/bit_pool.h 00:04:53.165 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.165 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.165 TEST_HEADER include/spdk/blobfs.h 00:04:53.165 TEST_HEADER include/spdk/blob.h 00:04:53.165 TEST_HEADER include/spdk/conf.h 00:04:53.165 TEST_HEADER include/spdk/config.h 00:04:53.165 TEST_HEADER include/spdk/cpuset.h 00:04:53.165 TEST_HEADER include/spdk/crc16.h 00:04:53.165 TEST_HEADER include/spdk/crc32.h 00:04:53.165 TEST_HEADER include/spdk/crc64.h 00:04:53.165 TEST_HEADER include/spdk/dif.h 00:04:53.165 TEST_HEADER include/spdk/dma.h 00:04:53.166 TEST_HEADER include/spdk/endian.h 00:04:53.166 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.166 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:53.166 TEST_HEADER include/spdk/env.h 00:04:53.166 TEST_HEADER 
include/spdk/event.h 00:04:53.166 TEST_HEADER include/spdk/fd_group.h 00:04:53.166 TEST_HEADER include/spdk/fd.h 00:04:53.166 TEST_HEADER include/spdk/file.h 00:04:53.166 TEST_HEADER include/spdk/ftl.h 00:04:53.166 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.166 TEST_HEADER include/spdk/hexlify.h 00:04:53.166 TEST_HEADER include/spdk/histogram_data.h 00:04:53.166 TEST_HEADER include/spdk/idxd.h 00:04:53.166 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.166 TEST_HEADER include/spdk/init.h 00:04:53.166 LINK idxd_perf 00:04:53.166 TEST_HEADER include/spdk/ioat.h 00:04:53.166 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.166 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.166 TEST_HEADER include/spdk/json.h 00:04:53.166 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.166 TEST_HEADER include/spdk/likely.h 00:04:53.166 TEST_HEADER include/spdk/log.h 00:04:53.166 TEST_HEADER include/spdk/lvol.h 00:04:53.166 TEST_HEADER include/spdk/memory.h 00:04:53.166 TEST_HEADER include/spdk/mmio.h 00:04:53.424 TEST_HEADER include/spdk/nbd.h 00:04:53.424 TEST_HEADER include/spdk/notify.h 00:04:53.424 TEST_HEADER include/spdk/nvme.h 00:04:53.424 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.424 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.424 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.424 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.424 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.424 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.424 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.424 TEST_HEADER include/spdk/nvmf.h 00:04:53.424 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.424 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.424 TEST_HEADER include/spdk/opal.h 00:04:53.424 LINK cmb_copy 00:04:53.424 TEST_HEADER include/spdk/opal_spec.h 00:04:53.424 TEST_HEADER include/spdk/pci_ids.h 00:04:53.424 TEST_HEADER include/spdk/pipe.h 00:04:53.424 CC test/dma/test_dma/test_dma.o 00:04:53.424 TEST_HEADER include/spdk/queue.h 00:04:53.424 TEST_HEADER include/spdk/reduce.h 00:04:53.424 TEST_HEADER include/spdk/rpc.h 00:04:53.424 TEST_HEADER include/spdk/scheduler.h 00:04:53.424 TEST_HEADER include/spdk/scsi.h 00:04:53.424 LINK interrupt_tgt 00:04:53.424 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.424 TEST_HEADER include/spdk/sock.h 00:04:53.424 TEST_HEADER include/spdk/stdinc.h 00:04:53.424 TEST_HEADER include/spdk/string.h 00:04:53.424 TEST_HEADER include/spdk/thread.h 00:04:53.424 TEST_HEADER include/spdk/trace.h 00:04:53.424 TEST_HEADER include/spdk/trace_parser.h 00:04:53.424 TEST_HEADER include/spdk/tree.h 00:04:53.424 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.424 TEST_HEADER include/spdk/ublk.h 00:04:53.424 TEST_HEADER include/spdk/util.h 00:04:53.424 TEST_HEADER include/spdk/uuid.h 00:04:53.424 TEST_HEADER include/spdk/version.h 00:04:53.424 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.424 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.424 TEST_HEADER include/spdk/vhost.h 00:04:53.424 TEST_HEADER include/spdk/vmd.h 00:04:53.424 TEST_HEADER include/spdk/xor.h 00:04:53.424 TEST_HEADER include/spdk/zipf.h 00:04:53.424 CXX test/cpp_headers/accel.o 00:04:53.424 CC test/env/mem_callbacks/mem_callbacks.o 00:04:53.424 LINK pmr_persistence 00:04:53.424 CC test/event/event_perf/event_perf.o 00:04:53.682 CXX test/cpp_headers/accel_module.o 00:04:53.682 CC test/event/reactor/reactor.o 00:04:53.682 LINK spdk_nvme_discover 00:04:53.682 LINK abort 00:04:53.682 CXX test/cpp_headers/assert.o 00:04:53.682 CC test/nvme/aer/aer.o 00:04:53.682 CC test/lvol/esnap/esnap.o 00:04:53.682 LINK event_perf 
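The block above, one TEST_HEADER entry per public header followed by one CXX object per header, is the header self-containment check: every include/spdk/*.h is compiled as its own C++ translation unit, so a header missing an include or a C++ guard fails the build immediately. The same idea by hand, with the generated file name being hypothetical:

    # One-header version of the cpp_headers check (file name hypothetical).
    echo '#include <spdk/accel.h>' > check_accel.cpp
    g++ -std=c++11 -I/home/vagrant/spdk_repo/spdk/include -c check_accel.cpp -o check_accel.o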
00:04:53.682 LINK reactor 00:04:53.682 LINK test_dma 00:04:53.682 CC app/spdk_top/spdk_top.o 00:04:53.940 CXX test/cpp_headers/barrier.o 00:04:53.940 CXX test/cpp_headers/base64.o 00:04:53.940 CC app/spdk_dd/spdk_dd.o 00:04:53.940 CC app/vhost/vhost.o 00:04:53.940 CC test/event/reactor_perf/reactor_perf.o 00:04:53.940 LINK aer 00:04:53.940 CXX test/cpp_headers/bdev.o 00:04:53.940 LINK mem_callbacks 00:04:53.940 CC test/event/app_repeat/app_repeat.o 00:04:53.940 LINK vhost 00:04:54.209 LINK reactor_perf 00:04:54.209 CC test/event/scheduler/scheduler.o 00:04:54.209 CC test/nvme/reset/reset.o 00:04:54.209 LINK app_repeat 00:04:54.209 CC test/env/vtophys/vtophys.o 00:04:54.209 CXX test/cpp_headers/bdev_module.o 00:04:54.209 LINK spdk_dd 00:04:54.209 CC test/nvme/sgl/sgl.o 00:04:54.482 LINK scheduler 00:04:54.482 CC app/fio/nvme/fio_plugin.o 00:04:54.482 LINK vtophys 00:04:54.482 CXX test/cpp_headers/bdev_zone.o 00:04:54.482 CXX test/cpp_headers/bit_array.o 00:04:54.482 LINK reset 00:04:54.482 CC app/fio/bdev/fio_plugin.o 00:04:54.482 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.482 CC test/env/memory/memory_ut.o 00:04:54.482 LINK sgl 00:04:54.740 CXX test/cpp_headers/bit_pool.o 00:04:54.740 LINK spdk_top 00:04:54.740 CC test/nvme/e2edp/nvme_dp.o 00:04:54.740 CC test/rpc_client/rpc_client_test.o 00:04:54.740 LINK env_dpdk_post_init 00:04:54.740 CXX test/cpp_headers/blob_bdev.o 00:04:54.740 CXX test/cpp_headers/blobfs_bdev.o 00:04:54.998 LINK rpc_client_test 00:04:54.998 LINK spdk_nvme 00:04:54.998 CC test/thread/poller_perf/poller_perf.o 00:04:54.998 LINK spdk_bdev 00:04:54.998 CXX test/cpp_headers/blobfs.o 00:04:54.998 LINK nvme_dp 00:04:54.998 CC test/env/pci/pci_ut.o 00:04:54.998 CXX test/cpp_headers/blob.o 00:04:54.998 CC test/nvme/overhead/overhead.o 00:04:54.998 LINK poller_perf 00:04:54.998 CC test/nvme/err_injection/err_injection.o 00:04:55.257 CC test/nvme/startup/startup.o 00:04:55.257 CXX test/cpp_headers/conf.o 00:04:55.257 CC test/nvme/reserve/reserve.o 00:04:55.257 CC test/nvme/simple_copy/simple_copy.o 00:04:55.257 CXX test/cpp_headers/config.o 00:04:55.257 CXX test/cpp_headers/cpuset.o 00:04:55.257 LINK err_injection 00:04:55.257 LINK startup 00:04:55.257 LINK overhead 00:04:55.257 LINK pci_ut 00:04:55.515 CC test/nvme/connect_stress/connect_stress.o 00:04:55.515 LINK reserve 00:04:55.515 CXX test/cpp_headers/crc16.o 00:04:55.515 LINK simple_copy 00:04:55.515 CXX test/cpp_headers/crc32.o 00:04:55.515 LINK memory_ut 00:04:55.515 CC test/nvme/boot_partition/boot_partition.o 00:04:55.515 CC test/nvme/compliance/nvme_compliance.o 00:04:55.515 LINK connect_stress 00:04:55.773 CXX test/cpp_headers/crc64.o 00:04:55.773 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.773 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.773 CC test/nvme/fdp/fdp.o 00:04:55.773 CC test/nvme/cuse/cuse.o 00:04:55.773 CXX test/cpp_headers/dif.o 00:04:55.773 CXX test/cpp_headers/dma.o 00:04:55.773 LINK boot_partition 00:04:55.773 CXX test/cpp_headers/endian.o 00:04:55.773 LINK doorbell_aers 00:04:55.773 CXX test/cpp_headers/env_dpdk.o 00:04:55.773 LINK fused_ordering 00:04:56.031 CXX test/cpp_headers/env.o 00:04:56.031 LINK nvme_compliance 00:04:56.031 CXX test/cpp_headers/event.o 00:04:56.031 CXX test/cpp_headers/fd_group.o 00:04:56.031 LINK fdp 00:04:56.031 CXX test/cpp_headers/fd.o 00:04:56.031 CXX test/cpp_headers/file.o 00:04:56.031 CXX test/cpp_headers/ftl.o 00:04:56.031 CXX test/cpp_headers/gpt_spec.o 00:04:56.031 CXX test/cpp_headers/hexlify.o 00:04:56.031 CXX 
test/cpp_headers/histogram_data.o 00:04:56.289 CXX test/cpp_headers/idxd.o 00:04:56.289 CXX test/cpp_headers/idxd_spec.o 00:04:56.289 CXX test/cpp_headers/init.o 00:04:56.289 CXX test/cpp_headers/ioat.o 00:04:56.289 CXX test/cpp_headers/ioat_spec.o 00:04:56.289 CXX test/cpp_headers/iscsi_spec.o 00:04:56.289 CXX test/cpp_headers/json.o 00:04:56.289 CXX test/cpp_headers/jsonrpc.o 00:04:56.289 CXX test/cpp_headers/likely.o 00:04:56.289 CXX test/cpp_headers/log.o 00:04:56.289 CXX test/cpp_headers/lvol.o 00:04:56.289 CXX test/cpp_headers/memory.o 00:04:56.548 CXX test/cpp_headers/mmio.o 00:04:56.548 CXX test/cpp_headers/nbd.o 00:04:56.548 CXX test/cpp_headers/notify.o 00:04:56.548 CXX test/cpp_headers/nvme.o 00:04:56.548 CXX test/cpp_headers/nvme_intel.o 00:04:56.548 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.548 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.548 CXX test/cpp_headers/nvme_spec.o 00:04:56.548 CXX test/cpp_headers/nvme_zns.o 00:04:56.548 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.548 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.548 CXX test/cpp_headers/nvmf.o 00:04:56.548 CXX test/cpp_headers/nvmf_spec.o 00:04:56.806 CXX test/cpp_headers/nvmf_transport.o 00:04:56.806 CXX test/cpp_headers/opal.o 00:04:56.806 CXX test/cpp_headers/opal_spec.o 00:04:56.806 CXX test/cpp_headers/pci_ids.o 00:04:56.806 LINK cuse 00:04:56.806 CXX test/cpp_headers/pipe.o 00:04:56.806 CXX test/cpp_headers/queue.o 00:04:56.807 CXX test/cpp_headers/reduce.o 00:04:56.807 CXX test/cpp_headers/rpc.o 00:04:56.807 CXX test/cpp_headers/scheduler.o 00:04:56.807 CXX test/cpp_headers/scsi.o 00:04:56.807 CXX test/cpp_headers/scsi_spec.o 00:04:57.065 CXX test/cpp_headers/sock.o 00:04:57.065 CXX test/cpp_headers/stdinc.o 00:04:57.065 CXX test/cpp_headers/string.o 00:04:57.065 CXX test/cpp_headers/thread.o 00:04:57.065 CXX test/cpp_headers/trace.o 00:04:57.065 CXX test/cpp_headers/trace_parser.o 00:04:57.065 CXX test/cpp_headers/tree.o 00:04:57.065 CXX test/cpp_headers/ublk.o 00:04:57.065 CXX test/cpp_headers/util.o 00:04:57.065 CXX test/cpp_headers/uuid.o 00:04:57.065 CXX test/cpp_headers/version.o 00:04:57.065 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.065 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.065 CXX test/cpp_headers/vhost.o 00:04:57.065 CXX test/cpp_headers/vmd.o 00:04:57.324 CXX test/cpp_headers/xor.o 00:04:57.324 CXX test/cpp_headers/zipf.o 00:04:58.259 LINK esnap 00:04:58.519 00:04:58.519 real 0m52.407s 00:04:58.519 user 5m1.469s 00:04:58.519 sys 0m57.465s 00:04:58.519 06:44:02 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:58.519 06:44:02 -- common/autotest_common.sh@10 -- $ set +x 00:04:58.519 ************************************ 00:04:58.519 END TEST make 00:04:58.519 ************************************ 00:04:58.519 06:44:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.519 06:44:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.519 06:44:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.519 06:44:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.519 06:44:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.519 06:44:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.519 06:44:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.519 06:44:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.519 06:44:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.519 06:44:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.519 06:44:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.519 06:44:02 -- 
scripts/common.sh@337 -- # local 'op=<' 00:04:58.519 06:44:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.519 06:44:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.519 06:44:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.519 06:44:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.519 06:44:02 -- scripts/common.sh@344 -- # : 1 00:04:58.519 06:44:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.519 06:44:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.519 06:44:03 -- scripts/common.sh@364 -- # decimal 1 00:04:58.519 06:44:03 -- scripts/common.sh@352 -- # local d=1 00:04:58.519 06:44:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.519 06:44:03 -- scripts/common.sh@354 -- # echo 1 00:04:58.519 06:44:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.519 06:44:03 -- scripts/common.sh@365 -- # decimal 2 00:04:58.519 06:44:03 -- scripts/common.sh@352 -- # local d=2 00:04:58.519 06:44:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.519 06:44:03 -- scripts/common.sh@354 -- # echo 2 00:04:58.519 06:44:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.519 06:44:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.519 06:44:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.519 06:44:03 -- scripts/common.sh@367 -- # return 0 00:04:58.519 06:44:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.519 06:44:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.519 --rc genhtml_branch_coverage=1 00:04:58.519 --rc genhtml_function_coverage=1 00:04:58.519 --rc genhtml_legend=1 00:04:58.519 --rc geninfo_all_blocks=1 00:04:58.519 --rc geninfo_unexecuted_blocks=1 00:04:58.519 00:04:58.519 ' 00:04:58.519 06:44:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.519 --rc genhtml_branch_coverage=1 00:04:58.519 --rc genhtml_function_coverage=1 00:04:58.519 --rc genhtml_legend=1 00:04:58.519 --rc geninfo_all_blocks=1 00:04:58.519 --rc geninfo_unexecuted_blocks=1 00:04:58.519 00:04:58.519 ' 00:04:58.519 06:44:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.519 --rc genhtml_branch_coverage=1 00:04:58.519 --rc genhtml_function_coverage=1 00:04:58.519 --rc genhtml_legend=1 00:04:58.519 --rc geninfo_all_blocks=1 00:04:58.519 --rc geninfo_unexecuted_blocks=1 00:04:58.519 00:04:58.519 ' 00:04:58.519 06:44:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.519 --rc genhtml_branch_coverage=1 00:04:58.519 --rc genhtml_function_coverage=1 00:04:58.519 --rc genhtml_legend=1 00:04:58.519 --rc geninfo_all_blocks=1 00:04:58.519 --rc geninfo_unexecuted_blocks=1 00:04:58.519 00:04:58.519 ' 00:04:58.519 06:44:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.519 06:44:03 -- nvmf/common.sh@7 -- # uname -s 00:04:58.519 06:44:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.519 06:44:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.519 06:44:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.519 06:44:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.519 06:44:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
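The scripts/common.sh trace just above (the 'lt 1.15 2' call that walks cmp_versions) is how the harness decides whether the installed lcov predates 2.x before choosing its coverage options: each version string is split on '.', '-' and ':' and the fields are compared numerically, padding the shorter version with zeros. A condensed re-implementation of that logic, a paraphrase of the trace rather than the script verbatim, and assuming purely numeric fields:

    # Minimal version comparison in the spirit of the cmp_versions trace.
    ver_lt() {    # returns 0 (true) when $1 is strictly older than $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the branch taken above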
00:04:58.519 06:44:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.519 06:44:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.519 06:44:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.519 06:44:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.519 06:44:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.519 06:44:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:04:58.519 06:44:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:04:58.519 06:44:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.519 06:44:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.519 06:44:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:58.519 06:44:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.780 06:44:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.780 06:44:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.780 06:44:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.780 06:44:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.780 06:44:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.780 06:44:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.780 06:44:03 -- paths/export.sh@5 -- # export PATH 00:04:58.780 06:44:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.780 06:44:03 -- nvmf/common.sh@46 -- # : 0 00:04:58.780 06:44:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:58.780 06:44:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:58.780 06:44:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:58.780 06:44:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.780 06:44:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.780 06:44:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:58.780 06:44:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:58.780 06:44:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:58.780 06:44:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:58.780 06:44:03 -- spdk/autotest.sh@32 -- # uname -s 00:04:58.780 06:44:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:58.780 06:44:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:58.780 06:44:03 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.780 06:44:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:58.780 06:44:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.780 06:44:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:58.780 06:44:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:58.780 06:44:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:58.780 06:44:03 -- spdk/autotest.sh@48 -- # udevadm_pid=60106 00:04:58.780 06:44:03 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:58.780 06:44:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:58.780 06:44:03 -- spdk/autotest.sh@54 -- # echo 60113 00:04:58.780 06:44:03 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:58.780 06:44:03 -- spdk/autotest.sh@56 -- # echo 60114 00:04:58.780 06:44:03 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:58.780 06:44:03 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:58.780 06:44:03 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.780 06:44:03 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:58.780 06:44:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.780 06:44:03 -- common/autotest_common.sh@10 -- # set +x 00:04:58.780 06:44:03 -- spdk/autotest.sh@70 -- # create_test_list 00:04:58.780 06:44:03 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:58.780 06:44:03 -- common/autotest_common.sh@10 -- # set +x 00:04:58.780 06:44:03 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:58.780 06:44:03 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:58.780 06:44:03 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:58.780 06:44:03 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:58.780 06:44:03 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:58.780 06:44:03 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:58.780 06:44:03 -- common/autotest_common.sh@1450 -- # uname 00:04:58.780 06:44:03 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:58.780 06:44:03 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:58.780 06:44:03 -- common/autotest_common.sh@1470 -- # uname 00:04:58.780 06:44:03 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:58.780 06:44:03 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:58.780 06:44:03 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:58.780 lcov: LCOV version 1.15 00:04:58.780 06:44:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:06.899 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:06.899 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:06.899 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:06.899 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:06.899 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:06.899 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:24.988 06:44:28 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:24.988 06:44:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.988 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:05:24.988 06:44:28 -- spdk/autotest.sh@89 -- # rm -f 00:05:24.988 06:44:28 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.988 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:25.247 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:25.247 06:44:29 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:25.247 06:44:29 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:25.247 06:44:29 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:25.247 06:44:29 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:25.247 06:44:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:25.247 06:44:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:25.247 06:44:29 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:25.247 06:44:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:25.247 06:44:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:25.247 06:44:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:25.247 06:44:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:25.247 06:44:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:25.247 06:44:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:25.247 06:44:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:25.247 06:44:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:25.247 06:44:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:25.247 06:44:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:25.247 06:44:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:25.247 06:44:29 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:25.247 06:44:29 -- spdk/autotest.sh@108 -- # grep -v p 00:05:25.247 06:44:29 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:25.247 06:44:29 -- spdk/autotest.sh@108 -- # for dev in $(ls 
/dev/nvme*n* | grep -v p || true) 00:05:25.247 06:44:29 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:25.247 06:44:29 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:25.247 06:44:29 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:25.247 06:44:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:25.247 No valid GPT data, bailing 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # pt= 00:05:25.247 06:44:29 -- scripts/common.sh@394 -- # return 1 00:05:25.247 06:44:29 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:25.247 1+0 records in 00:05:25.247 1+0 records out 00:05:25.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484323 s, 217 MB/s 00:05:25.247 06:44:29 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:25.247 06:44:29 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:25.247 06:44:29 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:25.247 06:44:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:25.247 06:44:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:25.247 No valid GPT data, bailing 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # pt= 00:05:25.247 06:44:29 -- scripts/common.sh@394 -- # return 1 00:05:25.247 06:44:29 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:25.247 1+0 records in 00:05:25.247 1+0 records out 00:05:25.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469979 s, 223 MB/s 00:05:25.247 06:44:29 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:25.247 06:44:29 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:25.247 06:44:29 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:25.247 06:44:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:25.247 06:44:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:25.247 No valid GPT data, bailing 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:25.247 06:44:29 -- scripts/common.sh@393 -- # pt= 00:05:25.247 06:44:29 -- scripts/common.sh@394 -- # return 1 00:05:25.247 06:44:29 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:25.507 1+0 records in 00:05:25.507 1+0 records out 00:05:25.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457211 s, 229 MB/s 00:05:25.507 06:44:29 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:25.507 06:44:29 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:25.507 06:44:29 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:25.507 06:44:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:25.507 06:44:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:25.507 No valid GPT data, bailing 00:05:25.507 06:44:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:25.507 06:44:29 -- scripts/common.sh@393 -- # pt= 00:05:25.507 06:44:29 -- scripts/common.sh@394 -- # return 1 00:05:25.507 06:44:29 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:25.507 1+0 records in 00:05:25.507 1+0 records out 00:05:25.507 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00432153 s, 243 MB/s 00:05:25.507 06:44:29 -- spdk/autotest.sh@116 -- # sync 00:05:25.765 06:44:30 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:25.765 06:44:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:25.765 06:44:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:27.694 06:44:32 -- spdk/autotest.sh@122 -- # uname -s 00:05:27.694 06:44:32 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:27.694 06:44:32 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:27.694 06:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.694 06:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.694 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.694 ************************************ 00:05:27.694 START TEST setup.sh 00:05:27.694 ************************************ 00:05:27.694 06:44:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:27.694 * Looking for test storage... 00:05:27.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.694 06:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.694 06:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.694 06:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.953 06:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.953 06:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.953 06:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.953 06:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.953 06:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.953 06:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.953 06:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.953 06:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.953 06:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.953 06:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.953 06:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.953 06:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.953 06:44:32 -- scripts/common.sh@344 -- # : 1 00:05:27.953 06:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.953 06:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.953 06:44:32 -- scripts/common.sh@364 -- # decimal 1 00:05:27.953 06:44:32 -- scripts/common.sh@352 -- # local d=1 00:05:27.953 06:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.953 06:44:32 -- scripts/common.sh@354 -- # echo 1 00:05:27.953 06:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.953 06:44:32 -- scripts/common.sh@365 -- # decimal 2 00:05:27.953 06:44:32 -- scripts/common.sh@352 -- # local d=2 00:05:27.953 06:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.953 06:44:32 -- scripts/common.sh@354 -- # echo 2 00:05:27.953 06:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.953 06:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.953 06:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.953 06:44:32 -- scripts/common.sh@367 -- # return 0 00:05:27.953 06:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.953 --rc genhtml_branch_coverage=1 00:05:27.953 --rc genhtml_function_coverage=1 00:05:27.953 --rc genhtml_legend=1 00:05:27.953 --rc geninfo_all_blocks=1 00:05:27.953 --rc geninfo_unexecuted_blocks=1 00:05:27.953 00:05:27.953 ' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.953 --rc genhtml_branch_coverage=1 00:05:27.953 --rc genhtml_function_coverage=1 00:05:27.953 --rc genhtml_legend=1 00:05:27.953 --rc geninfo_all_blocks=1 00:05:27.953 --rc geninfo_unexecuted_blocks=1 00:05:27.953 00:05:27.953 ' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.953 --rc genhtml_branch_coverage=1 00:05:27.953 --rc genhtml_function_coverage=1 00:05:27.953 --rc genhtml_legend=1 00:05:27.953 --rc geninfo_all_blocks=1 00:05:27.953 --rc geninfo_unexecuted_blocks=1 00:05:27.953 00:05:27.953 ' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.953 --rc genhtml_branch_coverage=1 00:05:27.953 --rc genhtml_function_coverage=1 00:05:27.953 --rc genhtml_legend=1 00:05:27.953 --rc geninfo_all_blocks=1 00:05:27.953 --rc geninfo_unexecuted_blocks=1 00:05:27.953 00:05:27.953 ' 00:05:27.953 06:44:32 -- setup/test-setup.sh@10 -- # uname -s 00:05:27.953 06:44:32 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:27.953 06:44:32 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:27.953 06:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.953 06:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.953 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.953 ************************************ 00:05:27.953 START TEST acl 00:05:27.953 ************************************ 00:05:27.953 06:44:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:27.953 * Looking for test storage... 
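The acl suite announced above is entered through the same wrapper every suite in this log uses: `run_test NAME CMD...` prints the START TEST banner, runs the command, and later emits the matching END TEST banner plus a real/user/sys timing block. A hedged reimplementation of that pattern; this is a sketch only (the actual common/autotest_common.sh wrapper also handles xtrace suppression and failure bookkeeping), with a simplified timing line:

#!/usr/bin/env bash
# Banner-and-time a named test command, in the style of the records above.
run_test() {
    local name=$1; shift
    local banner='************************************'
    printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
    local start=$SECONDS rc=0
    "$@" || rc=$?                    # run the suite, preserving its exit code
    printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
    echo "TEST $name finished in $((SECONDS - start))s, rc=$rc"
    return "$rc"
}

run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh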
00:05:27.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.953 06:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.953 06:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.953 06:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.212 06:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.212 06:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.212 06:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.212 06:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.212 06:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.212 06:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.212 06:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.212 06:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.212 06:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.212 06:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.212 06:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.212 06:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.212 06:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.212 06:44:32 -- scripts/common.sh@344 -- # : 1 00:05:28.212 06:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.212 06:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.212 06:44:32 -- scripts/common.sh@364 -- # decimal 1 00:05:28.212 06:44:32 -- scripts/common.sh@352 -- # local d=1 00:05:28.212 06:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.212 06:44:32 -- scripts/common.sh@354 -- # echo 1 00:05:28.212 06:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.212 06:44:32 -- scripts/common.sh@365 -- # decimal 2 00:05:28.212 06:44:32 -- scripts/common.sh@352 -- # local d=2 00:05:28.213 06:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.213 06:44:32 -- scripts/common.sh@354 -- # echo 2 00:05:28.213 06:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.213 06:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.213 06:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.213 06:44:32 -- scripts/common.sh@367 -- # return 0 00:05:28.213 06:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.213 06:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.213 --rc genhtml_branch_coverage=1 00:05:28.213 --rc genhtml_function_coverage=1 00:05:28.213 --rc genhtml_legend=1 00:05:28.213 --rc geninfo_all_blocks=1 00:05:28.213 --rc geninfo_unexecuted_blocks=1 00:05:28.213 00:05:28.213 ' 00:05:28.213 06:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.213 --rc genhtml_branch_coverage=1 00:05:28.213 --rc genhtml_function_coverage=1 00:05:28.213 --rc genhtml_legend=1 00:05:28.213 --rc geninfo_all_blocks=1 00:05:28.213 --rc geninfo_unexecuted_blocks=1 00:05:28.213 00:05:28.213 ' 00:05:28.213 06:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.213 --rc genhtml_branch_coverage=1 00:05:28.213 --rc genhtml_function_coverage=1 00:05:28.213 --rc genhtml_legend=1 00:05:28.213 --rc geninfo_all_blocks=1 00:05:28.213 --rc geninfo_unexecuted_blocks=1 00:05:28.213 00:05:28.213 ' 00:05:28.213 06:44:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.213 --rc genhtml_branch_coverage=1 00:05:28.213 --rc genhtml_function_coverage=1 00:05:28.213 --rc genhtml_legend=1 00:05:28.213 --rc geninfo_all_blocks=1 00:05:28.213 --rc geninfo_unexecuted_blocks=1 00:05:28.213 00:05:28.213 ' 00:05:28.213 06:44:32 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:28.213 06:44:32 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:28.213 06:44:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:28.213 06:44:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:28.213 06:44:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.213 06:44:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:28.213 06:44:32 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:28.213 06:44:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.213 06:44:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:28.213 06:44:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:28.213 06:44:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.213 06:44:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:28.213 06:44:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:28.213 06:44:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.213 06:44:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:28.213 06:44:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:28.213 06:44:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:28.213 06:44:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.213 06:44:32 -- setup/acl.sh@12 -- # devs=() 00:05:28.213 06:44:32 -- setup/acl.sh@12 -- # declare -a devs 00:05:28.213 06:44:32 -- setup/acl.sh@13 -- # drivers=() 00:05:28.213 06:44:32 -- setup/acl.sh@13 -- # declare -A drivers 00:05:28.213 06:44:32 -- setup/acl.sh@51 -- # setup reset 00:05:28.213 06:44:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.213 06:44:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.780 06:44:33 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:28.780 06:44:33 -- setup/acl.sh@16 -- # local dev driver 00:05:28.781 06:44:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:28.781 06:44:33 -- setup/acl.sh@15 -- # setup output status 00:05:28.781 06:44:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.781 06:44:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:29.040 Hugepages 00:05:29.040 node hugesize free / total 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # continue 00:05:29.040 06:44:33 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:29.040 00:05:29.040 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # continue 00:05:29.040 06:44:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:29.040 06:44:33 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:29.040 06:44:33 -- setup/acl.sh@20 -- # continue 00:05:29.040 06:44:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.040 06:44:33 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:29.040 06:44:33 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:29.040 06:44:33 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:29.040 06:44:33 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:29.040 06:44:33 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:29.040 06:44:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.299 06:44:33 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:29.299 06:44:33 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:29.299 06:44:33 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:29.299 06:44:33 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:29.299 06:44:33 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:29.299 06:44:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.299 06:44:33 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:29.299 06:44:33 -- setup/acl.sh@54 -- # run_test denied denied 00:05:29.299 06:44:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.299 06:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.299 06:44:33 -- common/autotest_common.sh@10 -- # set +x 00:05:29.299 ************************************ 00:05:29.299 START TEST denied 00:05:29.299 ************************************ 00:05:29.299 06:44:33 -- common/autotest_common.sh@1114 -- # denied 00:05:29.299 06:44:33 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:29.299 06:44:33 -- setup/acl.sh@38 -- # setup output config 00:05:29.299 06:44:33 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:29.299 06:44:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.299 06:44:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.235 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:30.235 06:44:34 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:30.235 06:44:34 -- setup/acl.sh@28 -- # local dev driver 00:05:30.235 06:44:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:30.235 06:44:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:30.235 06:44:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:30.235 06:44:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:30.235 06:44:34 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:30.235 06:44:34 -- setup/acl.sh@41 -- # setup reset 00:05:30.235 06:44:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.235 06:44:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.802 ************************************ 00:05:30.802 END TEST denied 00:05:30.802 ************************************ 00:05:30.802 00:05:30.802 real 0m1.439s 00:05:30.802 user 0m0.563s 00:05:30.802 sys 0m0.796s 00:05:30.802 06:44:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.802 06:44:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.802 06:44:35 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:30.802 06:44:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.802 06:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.802 06:44:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.802 ************************************ 00:05:30.802 START TEST allowed 00:05:30.802 ************************************ 00:05:30.802 06:44:35 -- common/autotest_common.sh@1114 -- # allowed 00:05:30.802 06:44:35 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:30.802 06:44:35 -- setup/acl.sh@45 -- # setup output config 00:05:30.802 06:44:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.802 06:44:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.802 06:44:35 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:31.369 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.369 06:44:35 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:31.369 06:44:35 -- setup/acl.sh@28 -- # local dev driver 00:05:31.369 06:44:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:31.369 06:44:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:31.369 06:44:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:31.369 06:44:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:31.369 06:44:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:31.369 06:44:35 -- setup/acl.sh@48 -- # setup reset 00:05:31.369 06:44:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.369 06:44:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.307 00:05:32.307 real 0m1.498s 00:05:32.307 user 0m0.679s 00:05:32.307 sys 0m0.815s 00:05:32.307 06:44:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.307 ************************************ 00:05:32.307 END TEST allowed 00:05:32.307 ************************************ 00:05:32.307 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.307 ************************************ 00:05:32.307 END TEST acl 00:05:32.307 ************************************ 00:05:32.307 00:05:32.307 real 0m4.287s 00:05:32.307 user 0m1.880s 00:05:32.307 sys 0m2.346s 00:05:32.307 06:44:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.307 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.307 06:44:36 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:32.307 06:44:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.307 06:44:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.307 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.307 ************************************ 00:05:32.307 START TEST hugepages 00:05:32.307 ************************************ 00:05:32.307 06:44:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:32.307 * Looking for test storage... 
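From here the hugepages suite does most of its work by scanning /proc/meminfo one field at a time; the long runs of `continue` records below are that scan stepping past every field until it reaches Hugepagesize (2048 kB on this VM, i.e. 2 MiB pages). The same lookup collapses to a single awk pass; `get_meminfo` below is an illustrative stand-in for the traced setup/common.sh helper, not its real implementation:

#!/usr/bin/env bash
# Print the numeric value of one /proc/meminfo field (kB for sized fields).
get_meminfo() {
    awk -v key="$1" -F':' '$1 == key { print $2 + 0; exit }' /proc/meminfo
}

get_meminfo Hugepagesize     # -> 2048 here, matching the trace below
get_meminfo HugePages_Total  # -> 2048 pages at this stage of the run

The traced helper also accepts an optional node argument and reads /sys/devices/system/node/node$node/meminfo when it exists, falling back to /proc/meminfo otherwise, which is the `[[ -e /sys/devices/system/node/node/meminfo ]]` check visible in the records that follow.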
00:05:32.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.307 06:44:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.307 06:44:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.307 06:44:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.307 06:44:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.307 06:44:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.307 06:44:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.307 06:44:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.307 06:44:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.307 06:44:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.307 06:44:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.307 06:44:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.307 06:44:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.307 06:44:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.307 06:44:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.307 06:44:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.307 06:44:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.307 06:44:36 -- scripts/common.sh@344 -- # : 1 00:05:32.307 06:44:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.307 06:44:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.307 06:44:36 -- scripts/common.sh@364 -- # decimal 1 00:05:32.307 06:44:36 -- scripts/common.sh@352 -- # local d=1 00:05:32.307 06:44:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.307 06:44:36 -- scripts/common.sh@354 -- # echo 1 00:05:32.567 06:44:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.567 06:44:36 -- scripts/common.sh@365 -- # decimal 2 00:05:32.567 06:44:36 -- scripts/common.sh@352 -- # local d=2 00:05:32.567 06:44:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.567 06:44:36 -- scripts/common.sh@354 -- # echo 2 00:05:32.567 06:44:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.567 06:44:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.567 06:44:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.567 06:44:36 -- scripts/common.sh@367 -- # return 0 00:05:32.567 06:44:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.567 06:44:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.567 --rc genhtml_branch_coverage=1 00:05:32.567 --rc genhtml_function_coverage=1 00:05:32.567 --rc genhtml_legend=1 00:05:32.567 --rc geninfo_all_blocks=1 00:05:32.567 --rc geninfo_unexecuted_blocks=1 00:05:32.567 00:05:32.567 ' 00:05:32.567 06:44:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.567 --rc genhtml_branch_coverage=1 00:05:32.567 --rc genhtml_function_coverage=1 00:05:32.567 --rc genhtml_legend=1 00:05:32.567 --rc geninfo_all_blocks=1 00:05:32.567 --rc geninfo_unexecuted_blocks=1 00:05:32.567 00:05:32.567 ' 00:05:32.567 06:44:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.567 --rc genhtml_branch_coverage=1 00:05:32.567 --rc genhtml_function_coverage=1 00:05:32.567 --rc genhtml_legend=1 00:05:32.567 --rc geninfo_all_blocks=1 00:05:32.567 --rc geninfo_unexecuted_blocks=1 00:05:32.567 00:05:32.567 ' 00:05:32.567 06:44:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.567 --rc genhtml_branch_coverage=1 00:05:32.567 --rc genhtml_function_coverage=1 00:05:32.567 --rc genhtml_legend=1 00:05:32.567 --rc geninfo_all_blocks=1 00:05:32.567 --rc geninfo_unexecuted_blocks=1 00:05:32.567 00:05:32.567 ' 00:05:32.567 06:44:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:32.567 06:44:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:32.567 06:44:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:32.567 06:44:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:32.567 06:44:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:32.567 06:44:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:32.567 06:44:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:32.567 06:44:36 -- setup/common.sh@18 -- # local node= 00:05:32.567 06:44:36 -- setup/common.sh@19 -- # local var val 00:05:32.567 06:44:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.567 06:44:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.567 06:44:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.567 06:44:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.567 06:44:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.567 06:44:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.567 06:44:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4565180 kB' 'MemAvailable: 7359532 kB' 'Buffers: 3704 kB' 'Cached: 2996876 kB' 'SwapCached: 0 kB' 'Active: 455752 kB' 'Inactive: 2662224 kB' 'Active(anon): 127908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119076 kB' 'Mapped: 51172 kB' 'Shmem: 10512 kB' 'KReclaimable: 82872 kB' 'Slab: 183588 kB' 'SReclaimable: 82872 kB' 'SUnreclaim: 100716 kB' 'KernelStack: 6832 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 322220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.567 06:44:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.567 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.567 06:44:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.567 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.567 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- 
setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.568 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.568 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # continue 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.569 06:44:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.569 06:44:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:32.569 06:44:36 -- setup/common.sh@33 -- # echo 2048 00:05:32.569 06:44:36 -- setup/common.sh@33 -- # return 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:32.569 06:44:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:32.569 06:44:36 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:32.569 06:44:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:32.569 06:44:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:32.569 06:44:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:32.569 06:44:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:32.569 06:44:36 -- setup/hugepages.sh@207 -- # get_nodes 00:05:32.569 06:44:36 -- setup/hugepages.sh@27 -- # local node 00:05:32.569 06:44:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.569 06:44:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:32.569 06:44:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.569 06:44:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.569 06:44:36 -- setup/hugepages.sh@208 -- # clear_hp 00:05:32.569 06:44:36 -- setup/hugepages.sh@37 -- # local node hp 00:05:32.569 06:44:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:32.569 06:44:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:32.569 06:44:36 -- setup/hugepages.sh@41 -- # echo 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:32.569 06:44:36 -- setup/hugepages.sh@41 -- # echo 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:32.569 06:44:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:32.569 06:44:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:32.569 06:44:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.569 06:44:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.569 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.569 ************************************ 00:05:32.569 START TEST default_setup 00:05:32.569 ************************************ 00:05:32.569 06:44:36 -- common/autotest_common.sh@1114 -- # default_setup 00:05:32.569 06:44:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.569 06:44:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.569 06:44:36 -- setup/hugepages.sh@51 -- # shift 00:05:32.569 06:44:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.569 06:44:36 -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.569 06:44:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.569 06:44:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.569 06:44:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.569 06:44:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.569 06:44:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.569 06:44:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.569 06:44:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.569 06:44:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.569 06:44:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:32.569 06:44:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:32.569 06:44:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:32.569 06:44:36 -- setup/hugepages.sh@73 -- # return 0 00:05:32.569 06:44:36 -- setup/hugepages.sh@137 -- # setup output 00:05:32.569 06:44:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.569 06:44:36 -- setup/common.sh@10 
00:05:33.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:33.397 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:33.398 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:05:33.398 06:44:37 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:33.398 06:44:37 -- setup/hugepages.sh@89 -- # local node
00:05:33.398 06:44:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:33.398 06:44:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:33.398 06:44:37 -- setup/hugepages.sh@92 -- # local surp
00:05:33.398 06:44:37 -- setup/hugepages.sh@93 -- # local resv
00:05:33.398 06:44:37 -- setup/hugepages.sh@94 -- # local anon
00:05:33.398 06:44:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:33.398 06:44:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:33.398 06:44:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:33.398 06:44:37 -- setup/common.sh@18 -- # local node=
00:05:33.398 06:44:37 -- setup/common.sh@19 -- # local var val
00:05:33.398 06:44:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.398 06:44:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.398 06:44:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.398 06:44:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.398 06:44:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.398 06:44:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.398 06:44:37 -- setup/common.sh@31 -- # IFS=': '
00:05:33.398 06:44:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685680 kB' 'MemAvailable: 9479868 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 457052 kB' 'Inactive: 2662228 kB' 'Active(anon): 129208 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120296 kB' 'Mapped: 51336 kB' 'Shmem: 10488 kB' 'KReclaimable: 82536 kB' 'Slab: 183216 kB' 'SReclaimable: 82536 kB' 'SUnreclaim: 100680 kB' 'KernelStack: 6784 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@31-32: the read loop walks the fields of the snapshot above one at a time, skipping each with 'continue', until it reaches AnonHugePages]
00:05:33.399 06:44:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:33.399 06:44:37 -- setup/common.sh@33 -- # echo 0
00:05:33.399 06:44:37 -- setup/common.sh@33 -- # return 0
00:05:33.399 06:44:37 -- setup/hugepages.sh@97 -- # anon=0
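verify_nr_hugepages takes anon from AnonHugePages, but only behind the transparent-hugepage guard traced at hugepages.sh@96; 'always [madvise] never' in that test is this runner's THP mode string. A sketch of the guard as the trace implies it (the sysfs path is the standard kernel location, assumed here, and get_meminfo is the sketch given earlier):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP active: anon huge pages may be in use
    else
        anon=0                              # THP disabled: nothing to account for
    fi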
00:05:33.399 06:44:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:33.399 06:44:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:33.399 06:44:37 -- setup/common.sh@18 -- # local node=
00:05:33.399 06:44:37 -- setup/common.sh@19 -- # local var val
00:05:33.399 06:44:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.399 06:44:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.399 06:44:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.399 06:44:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.399 06:44:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.399 06:44:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.399 06:44:37 -- setup/common.sh@31 -- # IFS=': '
00:05:33.399 06:44:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685680 kB' 'MemAvailable: 9479868 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456724 kB' 'Inactive: 2662228 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82536 kB' 'Slab: 183204 kB' 'SReclaimable: 82536 kB' 'SUnreclaim: 100668 kB' 'KernelStack: 6768 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:05:33.399 06:44:37 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: per-field scan of the snapshot above, each field skipped with 'continue', until HugePages_Surp]
00:05:33.400 06:44:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:33.400 06:44:37 -- setup/common.sh@33 -- # echo 0
00:05:33.400 06:44:37 -- setup/common.sh@33 -- # return 0
00:05:33.400 06:44:37 -- setup/hugepages.sh@99 -- # surp=0
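The same helper supplies the two counters that could make the pool total disagree with what the test configured. Their meaning is plain kernel meminfo semantics, not SPDK-specific: HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in, and HugePages_Surp counts pages allocated beyond the static pool through overcommit. Using the earlier get_meminfo sketch:

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    (( surp == 0 && resv == 0 )) || echo "pool not quiescent: surp=$surp resv=$resv"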
00:05:33.400 06:44:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:33.400 06:44:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:33.400 06:44:37 -- setup/common.sh@18 -- # local node=
00:05:33.400 06:44:37 -- setup/common.sh@19 -- # local var val
00:05:33.400 06:44:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.400 06:44:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.400 06:44:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.400 06:44:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.400 06:44:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.400 06:44:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.400 06:44:37 -- setup/common.sh@31 -- # IFS=': '
00:05:33.400 06:44:37 -- setup/common.sh@31 -- # read -r var val _
00:05:33.400 06:44:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685680 kB' 'MemAvailable: 9479868 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456504 kB' 'Inactive: 2662228 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183200 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100668 kB' 'KernelStack: 6768 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@31-32: per-field scan of the snapshot above, each field skipped with 'continue', until HugePages_Rsvd]
00:05:33.402 06:44:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:33.402 06:44:37 -- setup/common.sh@33 -- # echo 0
00:05:33.402 06:44:37 -- setup/common.sh@33 -- # return 0
00:05:33.402 06:44:37 -- setup/hugepages.sh@100 -- # resv=0
00:05:33.402 nr_hugepages=1024
00:05:33.402 06:44:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:33.402 resv_hugepages=0
00:05:33.402 06:44:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:33.402 surplus_hugepages=0
00:05:33.402 06:44:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:33.402 anon_hugepages=0
00:05:33.402 06:44:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:33.402 06:44:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:33.402 06:44:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
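With surp and resv in hand, the verification reduces to the two integer checks traced at hugepages.sh@107 and @109; the bare 1024 literals are the values that command substitutions expanded to before xtrace printed each test. Restated with this run's numbers:

    nr_hugepages=1024 surp=0 resv=0          # values established above
    (( 1024 == nr_hugepages + surp + resv )) # hugepages.sh@107: total matches the request
    (( 1024 == nr_hugepages ))               # hugepages.sh@109: nothing surplus or reserved
    echo "hugepage pool verified"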
00:05:33.402 06:44:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:33.402 06:44:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:33.402 06:44:37 -- setup/common.sh@18 -- # local node=
00:05:33.402 06:44:37 -- setup/common.sh@19 -- # local var val
00:05:33.402 06:44:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.402 06:44:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.402 06:44:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.402 06:44:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.402 06:44:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.402 06:44:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.402 06:44:37 -- setup/common.sh@31 -- # IFS=': '
00:05:33.402 06:44:37 -- setup/common.sh@31 -- # read -r var val _
00:05:33.402 06:44:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685680 kB' 'MemAvailable: 9479868 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456744 kB' 'Inactive: 2662228 kB' 'Active(anon): 128900 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183192 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100660 kB' 'KernelStack: 6768 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
06:44:37 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace condensed: the IFS=': ' / read -r var val _ / key-test / continue quartet repeats for every remaining /proc/meminfo key, Active(file) through Unaccepted, timestamps 00:05:33.402-00:05:33.662, none matching HugePages_Total ...]
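The quartet condensed above is the whole of setup/common.sh's get_meminfo scan: split each 'Key: value kB' line on ': ', skip until the requested key, echo its value. A minimal sketch of that scan, assuming only stock bash plus the standard /proc and per-node sysfs meminfo files (the function name is ours, not the repo's):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node statistics live under sysfs; each of their lines carries a "Node <id> " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the same skip-until-match walk the trace shows
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}
# get_meminfo_sketch HugePages_Total   -> prints 1024 on this box, matching the echo below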
00:05:33.662 06:44:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:33.662 06:44:37 -- setup/common.sh@33 -- # echo 1024
00:05:33.662 06:44:37 -- setup/common.sh@33 -- # return 0
00:05:33.662 06:44:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:33.662 06:44:37 -- setup/hugepages.sh@112 -- # get_nodes
00:05:33.662 06:44:37 -- setup/hugepages.sh@27 -- # local node
00:05:33.662 06:44:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:33.662 06:44:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:33.662 06:44:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:33.662 06:44:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:33.662 06:44:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:33.662 06:44:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:33.662 06:44:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:33.662 06:44:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:33.662 06:44:37 -- setup/common.sh@18 -- # local node=0
00:05:33.662 06:44:37 -- setup/common.sh@19 -- # local var val
00:05:33.662 06:44:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.662 06:44:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.662 06:44:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:33.662 06:44:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:33.662 06:44:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.662 06:44:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.662 06:44:37 -- setup/common.sh@31 -- # IFS=': '
00:05:33.662 06:44:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685680 kB' 'MemUsed: 5553428 kB' 'SwapCached: 0 kB' 'Active: 456716 kB' 'Inactive: 2662228 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3000572 kB' 'Mapped: 50988 kB' 'AnonPages: 119984 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82532 kB' 'Slab: 183176 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace condensed: the per-key walk scans the node0 snapshot from MemTotal through HugePages_Free, timestamps 00:05:33.662-00:05:33.663, before reaching its match ...]
00:05:33.663 06:44:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:33.663 06:44:37 -- setup/common.sh@33 -- # echo 0
00:05:33.663 06:44:37 -- setup/common.sh@33 -- # return 0
00:05:33.663 06:44:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:33.663 06:44:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
node0=1024 expecting 1024
************************************
END TEST default_setup
************************************
00:05:33.663 06:44:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:33.663 06:44:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:33.663 06:44:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:33.663 06:44:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
real 0m1.045s
user 0m0.472s
sys 0m0.445s
00:05:33.663 06:44:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:33.663 06:44:37 -- common/autotest_common.sh@10 -- # set +x
00:05:33.663 06:44:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:33.663 06:44:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:33.663 06:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:33.663 06:44:38 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST per_node_1G_alloc
************************************
00:05:33.663 06:44:38 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:33.663 06:44:38 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:33.663 06:44:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:33.663 06:44:38 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:33.663 06:44:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:33.663 06:44:38 -- setup/hugepages.sh@51 -- # shift
00:05:33.663 06:44:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:33.663 06:44:38 -- setup/hugepages.sh@52 -- # local node_ids
00:05:33.663 06:44:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:33.663 06:44:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:33.663 06:44:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:33.663 06:44:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:33.663 06:44:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:33.663 06:44:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:33.663 06:44:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:33.663 06:44:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:33.663 06:44:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:33.663 06:44:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:33.663 06:44:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:33.663 06:44:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:33.663 06:44:38 -- setup/hugepages.sh@73 -- # return 0
00:05:33.663 06:44:38 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:33.663 06:44:38 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:33.663 06:44:38 -- setup/hugepages.sh@146 -- # setup output
00:05:33.663 06:44:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:33.663 06:44:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:33.923 06:44:38 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:33.923 06:44:38 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:33.923 06:44:38 -- setup/hugepages.sh@89 -- # local node
00:05:33.923 06:44:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:33.923 06:44:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:33.923 06:44:38 -- setup/hugepages.sh@92 -- # local surp
00:05:33.923 06:44:38 -- setup/hugepages.sh@93 -- # local resv
00:05:33.923 06:44:38 -- setup/hugepages.sh@94 -- # local anon
00:05:33.923 06:44:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:33.923 06:44:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:33.923 06:44:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:33.923 06:44:38 -- setup/common.sh@18 -- # local node=
00:05:33.923 06:44:38 -- setup/common.sh@19 -- # local var val
00:05:33.923 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.923 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
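NRHUGE=512 HUGENODE=0 asks setup.sh to place the whole 1 GiB reservation (512 pages of 2048 kB) on node 0. A minimal sketch of that per-node reservation, assuming the stock kernel sysfs knobs rather than quoting setup.sh itself:

# values taken from the trace above; the sysfs path is the standard kernel
# interface for per-node hugepage pools, not a path quoted from this repo
NRHUGE=512 HUGENODE=0 HPSIZE_KB=2048
pool=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-${HPSIZE_KB}kB
echo "$NRHUGE" > "$pool/nr_hugepages"
# the kernel may grant fewer pages than requested; re-read to confirm
(( $(<"$pool/nr_hugepages") == NRHUGE )) || echo "node$HUGENODE: reservation fell short" >&2

This is why verify_nr_hugepages below re-derives every count from meminfo instead of trusting the write.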
00:05:33.923 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.923 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.923 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.923 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.923 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:05:33.923 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:05:33.923 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7734816 kB' 'MemAvailable: 10529016 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456760 kB' 'Inactive: 2662240 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120292 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183172 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100640 kB' 'KernelStack: 6760 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32 xtrace condensed: the per-key walk scans this snapshot from MemTotal through HardwareCorrupted, timestamps 00:05:33.923-00:05:33.924, before reaching its match ...]
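The printf above is how common.sh replays the snapshot it slurped with mapfile at @28. A sketch of turning such a snapshot into a lookup table instead of re-scanning it once per key (the associative array is our variation, not what common.sh does):

shopt -s extglob
declare -A mem
mapfile -t lines < /proc/meminfo
# per-node meminfo files prefix each line with "Node <id> "; strip it as common.sh@29 does
lines=("${lines[@]#Node +([0-9]) }")
for line in "${lines[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    mem[$var]=$val
done
echo "AnonHugePages: ${mem[AnonHugePages]} kB"   # 0 kB in the snapshot above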
00:05:33.924 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:33.924 06:44:38 -- setup/common.sh@33 -- # echo 0
00:05:33.924 06:44:38 -- setup/common.sh@33 -- # return 0
00:05:33.924 06:44:38 -- setup/hugepages.sh@97 -- # anon=0
00:05:33.924 06:44:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:33.924 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:33.924 06:44:38 -- setup/common.sh@18 -- # local node=
00:05:33.924 06:44:38 -- setup/common.sh@19 -- # local var val
00:05:33.924 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:33.924 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.924 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.924 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.924 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.924 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.186 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:05:34.186 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7734816 kB' 'MemAvailable: 10529016 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456780 kB' 'Inactive: 2662240 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183188 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100656 kB' 'KernelStack: 6736 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32 xtrace condensed: the per-key walk scans this snapshot from MemTotal through HugePages_Free, timestamps 00:05:34.186-00:05:34.187, before reaching its match ...]
00:05:34.187 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.187 06:44:38 -- setup/common.sh@33 -- # echo 0
00:05:34.187 06:44:38 -- setup/common.sh@33 -- # return 0
00:05:34.187 06:44:38 -- setup/hugepages.sh@99 -- # surp=0
00:05:34.187 06:44:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:34.187 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:34.187 06:44:38 -- setup/common.sh@18 -- # local node=
00:05:34.187 06:44:38 -- setup/common.sh@19 -- # local var val
00:05:34.187 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:34.187 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.187 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.187 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.187 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:34.187 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.187 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:05:34.187 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7734564 kB' 'MemAvailable: 10528764 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456880 kB' 'Inactive: 2662240 kB' 'Active(anon): 129036 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120124 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183232 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100700 kB' 'KernelStack: 6784 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32 xtrace condensed: the HugePages_Rsvd walk is under way, scanning this snapshot key by key from MemTotal onward, timestamps 00:05:34.187-00:05:34.188 ...]
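Strung together, the three lookups above (anon, surp, and the rsvd scan in progress) are the inputs to verify_nr_hugepages' global check, the same @110 assertion seen after the default_setup scan earlier. A sketch of that arithmetic, reusing the get_meminfo_sketch helper defined above; the values in comments are the ones this trace echoed or printed:

nr_hugepages=512                               # requested by get_test_nr_hugepages
anon=$(get_meminfo_sketch AnonHugePages)       # 0, so transparent hugepages aren't interfering
surp=$(get_meminfo_sketch HugePages_Surp)      # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 per the snapshot; the trace hasn't echoed it yet
total=$(get_meminfo_sketch HugePages_Total)    # 512
# mirrors hugepages.sh@110: the pool must account for every page the test asked for
(( total == nr_hugepages + surp + resv )) || exit 1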
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.188 06:44:38 -- setup/common.sh@33 -- # echo 0 00:05:34.188 06:44:38 -- setup/common.sh@33 -- # return 0 00:05:34.188 06:44:38 -- setup/hugepages.sh@100 -- # resv=0 00:05:34.188 nr_hugepages=512 00:05:34.188 06:44:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:34.188 resv_hugepages=0 00:05:34.188 surplus_hugepages=0 00:05:34.188 anon_hugepages=0 00:05:34.188 06:44:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.188 06:44:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.188 06:44:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.188 06:44:38 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:34.188 06:44:38 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:34.188 06:44:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.188 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.188 06:44:38 -- setup/common.sh@18 -- # local node= 00:05:34.188 06:44:38 -- setup/common.sh@19 -- # local var val 00:05:34.188 06:44:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.188 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.188 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.188 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.188 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.188 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.188 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7734564 kB' 'MemAvailable: 10528764 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 2662240 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183236 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100704 kB' 'KernelStack: 6768 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 
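For reference: the trace above is setup/common.sh's get_meminfo scanning every /proc/meminfo key with IFS=': ' and "read -r var val _", continuing past each non-matching key until it reaches the requested one (HugePages_Rsvd -> 0 just now, HugePages_Total -> 512 below). The backslash-escaped \H\u\g\e\P\a\g\e\s\_... strings are simply how bash xtrace renders the quoted right-hand side of [[ $var == "$get" ]], marking it as a literal match rather than a glob. A minimal standalone re-sketch of that pattern, simplified from what the xtrace shows and not the repo's exact code:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        # Per-node counters live in sysfs; fall back to /proc/meminfo otherwise.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # sysfs per-node lines carry a "Node N " prefix; strip it before matching.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # -> 512 in the dump above
    get_meminfo HugePages_Surp 0     # per-node variant, as used later in this log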
06:44:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 
06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.189 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.189 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.190 06:44:38 -- setup/common.sh@33 -- # echo 512 00:05:34.190 06:44:38 -- setup/common.sh@33 -- # return 0 00:05:34.190 06:44:38 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:34.190 06:44:38 -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.190 06:44:38 -- setup/hugepages.sh@27 -- # local node 00:05:34.190 06:44:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.190 06:44:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:34.190 06:44:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.190 06:44:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.190 06:44:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.190 06:44:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.190 06:44:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.190 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.190 06:44:38 -- setup/common.sh@18 -- # local node=0 00:05:34.190 06:44:38 -- setup/common.sh@19 -- # local 
var val 00:05:34.190 06:44:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.190 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.190 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.190 06:44:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.190 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.190 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7734564 kB' 'MemUsed: 4504544 kB' 'SwapCached: 0 kB' 'Active: 456860 kB' 'Inactive: 2662240 kB' 'Active(anon): 129016 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3000572 kB' 'Mapped: 50988 kB' 'AnonPages: 120160 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82532 kB' 'Slab: 183232 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- 
setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.190 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.190 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # continue 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.191 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.191 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.191 06:44:38 -- setup/common.sh@33 -- # echo 0 00:05:34.191 06:44:38 -- setup/common.sh@33 -- # return 0 00:05:34.191 06:44:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.191 06:44:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.191 node0=512 expecting 512 00:05:34.191 ************************************ 00:05:34.191 END TEST per_node_1G_alloc 00:05:34.191 ************************************ 00:05:34.191 06:44:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.191 06:44:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:34.191 06:44:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:34.191 00:05:34.191 real 0m0.566s 00:05:34.191 user 0m0.297s 00:05:34.191 sys 0m0.279s 00:05:34.191 06:44:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.191 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.191 06:44:38 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:34.191 06:44:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.191 06:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.191 06:44:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.191 ************************************ 00:05:34.191 START TEST even_2G_alloc 00:05:34.191 ************************************ 00:05:34.191 06:44:38 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:34.191 06:44:38 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:34.191 06:44:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:34.191 06:44:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:34.191 06:44:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:34.191 06:44:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.191 06:44:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.191 06:44:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:34.191 06:44:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.191 06:44:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.191 06:44:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.191 06:44:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.191 06:44:38 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:34.191 06:44:38 -- setup/hugepages.sh@83 -- # : 0 00:05:34.191 06:44:38 -- setup/hugepages.sh@84 -- # : 0 00:05:34.191 06:44:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.191 06:44:38 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:34.191 06:44:38 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:34.191 06:44:38 -- setup/hugepages.sh@153 -- # setup output 00:05:34.191 06:44:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.191 06:44:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.713 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.713 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.713 06:44:39 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:34.713 06:44:39 -- setup/hugepages.sh@89 -- # local node 00:05:34.713 06:44:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.713 06:44:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.713 06:44:39 -- setup/hugepages.sh@92 -- # local surp 00:05:34.713 06:44:39 -- setup/hugepages.sh@93 -- # local resv 00:05:34.713 06:44:39 -- setup/hugepages.sh@94 -- # local anon 00:05:34.713 06:44:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.713 06:44:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.713 06:44:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.713 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:34.713 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:34.713 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.713 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.713 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.713 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.713 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.713 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6682428 kB' 'MemAvailable: 9476628 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 457208 kB' 'Inactive: 2662240 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 51180 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183204 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100672 kB' 'KernelStack: 6792 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.713 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.713 06:44:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 
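Aside on the numbers just printed: get_test_nr_hugepages received 2097152 kB, and the default hugepage size on this VM is 2048 kB, so the test asks for 2097152 / 2048 = 1024 pages, which is exactly the HugePages_Total / Hugetlb pair in the dump above. A hypothetical one-off check of that arithmetic, not part of the harness:

    size_kb=2097152                                                     # 2 GiB request
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                # -> 1024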
06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # 
continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.714 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.714 06:44:39 -- setup/common.sh@33 -- # echo 0 00:05:34.714 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:34.714 06:44:39 -- setup/hugepages.sh@97 -- # anon=0 00:05:34.714 06:44:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.714 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.714 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:34.714 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:34.714 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.714 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.714 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.714 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.714 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.714 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.714 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6681924 kB' 'MemAvailable: 9476124 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456536 kB' 'Inactive: 2662240 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120036 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183208 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100676 kB' 'KernelStack: 6768 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 
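Note: the "always [madvise] never != *\[\n\e\v\e\r\]*" test a few entries back is verify_nr_hugepages checking the kernel's transparent-hugepage mode; only when THP is not pinned to "[never]" does it bother reading AnonHugePages (0 kB in this run) before moving on to HugePages_Surp as above. Reconstructed from the trace, reusing the get_meminfo sketch earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *'[never]'* ]]; then
        # THP may be serving anonymous huge pages; count them toward the total.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon_hugepages=$anon"    # -> 0 here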
00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # 
continue 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.715 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.715 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.716 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.716 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.716 06:44:39 -- setup/common.sh@33 -- # echo 0 00:05:34.716 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:34.716 06:44:39 -- setup/hugepages.sh@99 -- # surp=0 00:05:34.716 06:44:39 -- 
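The loop traced above is the harness's meminfo reader: snapshot /proc/meminfo (or a node's meminfo file), strip any "Node N " prefix, then walk the fields with IFS=': ' until the requested key matches and echo its value. A minimal bash reconstruction from this xtrace — get_meminfo_sketch is a hypothetical name for illustration, not SPDK's exact setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    # Reconstructed sketch of the get_meminfo pattern seen in the trace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # With a node argument, read that node's meminfo instead; its lines
        # carry a "Node N " prefix that the expansion below strips.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Surp: 0" splits to var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp     # prints 0 in this run
    get_meminfo_sketch HugePages_Surp 0   # same key, read from node0's meminfo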
00:05:34.716 06:44:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:34.716 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:34.716 06:44:39 -- setup/common.sh@18 -- # local node=
00:05:34.716 06:44:39 -- setup/common.sh@19 -- # local var val
00:05:34.716 06:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:05:34.716 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.716 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.716 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.716 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:05:34.716 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.716 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6681924 kB' 'MemAvailable: 9476124 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456588 kB' 'Inactive: 2662240 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120084 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183200 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100668 kB' 'KernelStack: 6768 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[... xtrace again walks the snapshot field by field until the requested key matches ...]
00:05:34.717 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.717 06:44:39 -- setup/common.sh@33 -- # echo 0
00:05:34.717 06:44:39 -- setup/common.sh@33 -- # return 0
00:05:34.717 06:44:39 -- setup/hugepages.sh@100 -- # resv=0
00:05:34.717 nr_hugepages=1024
00:05:34.717 06:44:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:34.717 resv_hugepages=0
00:05:34.717 06:44:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:34.717 surplus_hugepages=0
00:05:34.717 06:44:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:34.717 anon_hugepages=0
00:05:34.717 06:44:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:34.717 06:44:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:34.717 06:44:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
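The checks traced at setup/hugepages.sh@107-109 assert that the kernel's hugepage pool is exactly what the test requested once surplus and reserved pages are folded in. A standalone sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from above (the values shown are this run's):

    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 here
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 here
    # Pool is consistent when the kernel's total equals the requested count
    # plus any surplus/reserved pages the kernel is tracking on top of it.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi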
mem=("${mem[@]#Node +([0-9]) }") 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.717 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6681924 kB' 'MemAvailable: 9476124 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456584 kB' 'Inactive: 2662240 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183200 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100668 kB' 'KernelStack: 6784 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.717 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.717 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 
06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 
00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.718 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.718 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 
00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # continue 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.719 06:44:39 -- setup/common.sh@33 -- # echo 1024 00:05:34.719 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:34.719 06:44:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.719 06:44:39 -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.719 06:44:39 -- setup/hugepages.sh@27 -- # local node 00:05:34.719 06:44:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.719 06:44:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:34.719 06:44:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.719 06:44:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.719 06:44:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.719 06:44:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.719 06:44:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.719 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.719 06:44:39 -- setup/common.sh@18 -- # local node=0 00:05:34.719 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:34.719 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.719 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.719 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.719 06:44:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.719 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.719 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.719 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6681924 kB' 'MemUsed: 5557184 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 2662240 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3000572 kB' 'Mapped: 50988 kB' 'AnonPages: 120036 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82532 kB' 'Slab: 183200 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:34.719 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.719 06:44:39 -- 
[... xtrace scans the node0 snapshot field by field until HugePages_Surp matches ...]
00:05:34.720 06:44:39 -- setup/common.sh@33 -- # echo 0
00:05:34.720 06:44:39 -- setup/common.sh@33 -- # return 0
00:05:34.720 06:44:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:34.720 06:44:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:34.720 06:44:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
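The per-node pass traced above builds an expected count for each NUMA node (all 1024 pages land on node0 in this single-node run), folds in reserved and per-node surplus pages, and compares against /sys/devices/system/node/nodeN/meminfo. A rough sketch of that logic, reusing get_meminfo_sketch from earlier; the exact accounting in SPDK's verify_nr_hugepages may differ:

    shopt -s extglob nullglob
    declare -a nodes_test nodes_sys
    resv=0   # from the HugePages_Rsvd probe earlier in this run
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[n]=$(get_meminfo_sketch HugePages_Total "$n")
        nodes_test[n]=1024                          # the test's per-node expectation
        (( nodes_test[n] += resv ))                 # fold in reserved pages
        surp=$(get_meminfo_sketch HugePages_Surp "$n")
        (( nodes_test[n] += surp ))                 # fold in this node's surplus
        echo "node$n=${nodes_sys[n]} expecting ${nodes_test[n]}"
    done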
00:05:34.720 node0=1024 expecting 1024
00:05:34.720 06:44:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:34.720 06:44:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:34.720 real 0m0.534s
00:05:34.720 user 0m0.269s
00:05:34.720 sys 0m0.294s
00:05:34.720 06:44:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:34.720 06:44:39 -- common/autotest_common.sh@10 -- # set +x
00:05:34.720 ************************************
00:05:34.720 END TEST even_2G_alloc
00:05:34.720 ************************************
00:05:34.720 06:44:39 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:34.720 06:44:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:34.720 06:44:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:34.720 06:44:39 -- common/autotest_common.sh@10 -- # set +x
00:05:34.720 ************************************
00:05:34.720 START TEST odd_alloc
00:05:34.720 ************************************
00:05:34.720 06:44:39 -- common/autotest_common.sh@1114 -- # odd_alloc
00:05:34.720 06:44:39 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:34.720 06:44:39 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:34.720 06:44:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:34.720 06:44:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:34.720 06:44:39 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:34.720 06:44:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:34.720 06:44:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:34.720 06:44:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:34.720 06:44:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:34.720 06:44:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:34.720 06:44:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:34.720 06:44:39 -- setup/hugepages.sh@83 -- # : 0
00:05:34.720 06:44:39 -- setup/hugepages.sh@84 -- # : 0
00:05:34.720 06:44:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.720 06:44:39 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:34.720 06:44:39 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:34.720 06:44:39 -- setup/hugepages.sh@160 -- # setup output
00:05:34.720 06:44:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:34.720 06:44:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:35.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:35.292 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:35.292 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:35.292 06:44:39 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:35.292 06:44:39 -- setup/hugepages.sh@89 -- # local node
00:05:35.292 06:44:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:35.292 06:44:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:35.292 06:44:39 -- setup/hugepages.sh@92 -- # local surp
00:05:35.292 06:44:39 -- setup/hugepages.sh@93 -- # local resv
00:05:35.292 06:44:39 -- setup/hugepages.sh@94 -- # local anon
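Where nr_hugepages=1025 above comes from: HUGEMEM=2049 (MB) becomes a size of 2049 * 1024 = 2098176 kB, which at the default 2048 kB hugepage size needs 1024.5 pages, rounded up to an intentionally odd 1025. A sketch of that computation; the real get_test_nr_hugepages may derive it differently:

    default_hugepages=2048        # kB, Hugepagesize from /proc/meminfo
    HUGEMEM=2049                  # MB requested by the odd_alloc test
    size=$(( HUGEMEM * 1024 ))    # 2098176 kB
    (( size >= default_hugepages )) || { echo "request below one hugepage" >&2; exit 1; }
    # Round up so a half-page remainder still gets a whole page.
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # -> 1025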
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.292 06:44:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.292 06:44:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.292 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:35.292 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:35.292 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.292 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.292 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.292 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.292 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.292 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6675812 kB' 'MemAvailable: 9470012 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456816 kB' 'Inactive: 2662240 kB' 'Active(anon): 128972 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120312 kB' 'Mapped: 51260 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183228 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100696 kB' 'KernelStack: 6744 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 
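even_2G_alloc passes (node0=1024 as expected) and odd_alloc begins. It requests HUGEMEM=2049 MB of 2 MB pages: 2049 MB = 2098176 kB, and the trace shows get_test_nr_hugepages turning that into nr_hugepages=1025, a deliberately odd page count; the numbers are consistent with rounding the page count up rather than truncating (an inference from the values, not shown explicitly in the trace). verify_nr_hugepages then confirms that transparent hugepages are not fully disabled, which is the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test at @96 against the bracketed selection in /sys/kernel/mm/transparent_hugepage/enabled. A sketch of both steps, with paths from the standard kernel sysfs locations:

# Sketch of the two checks at the start of odd_alloc / verify_nr_hugepages.
# The ceil-rounding is an assumption inferred from HUGEMEM=2049 -> 1025.
hugemem_mb=2049
hugepagesize_kb=2048                   # from 'Hugepagesize: 2048 kB' above
size_kb=$(( hugemem_mb * 1024 ))       # 2098176 kB, the @49 'local size'
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"      # -> 1025, odd by design

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # 'always [madvise] never'
[[ $thp != *"[never]"* ]] && echo "THP not fully disabled; test may proceed"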
00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.292 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.292 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # 
continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.293 06:44:39 -- setup/common.sh@33 -- # echo 0 00:05:35.293 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:35.293 06:44:39 -- setup/hugepages.sh@97 -- # anon=0 00:05:35.293 06:44:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.293 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.293 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:35.293 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:35.293 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.293 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.293 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.293 06:44:39 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.293 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.293 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6675812 kB' 'MemAvailable: 9470012 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456672 kB' 'Inactive: 2662240 kB' 'Active(anon): 128828 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120176 kB' 'Mapped: 51152 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'KernelStack: 6760 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 
-- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.293 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.293 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 
00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 
00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.294 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.294 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.294 06:44:39 -- setup/common.sh@33 -- # echo 0 00:05:35.295 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:35.295 06:44:39 -- setup/hugepages.sh@99 -- # surp=0 00:05:35.295 06:44:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.295 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.295 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:35.295 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:35.295 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.295 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.295 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.295 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.295 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.295 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6675812 kB' 'MemAvailable: 9470012 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456596 kB' 'Inactive: 2662240 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120132 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'KernelStack: 6784 kB' 
'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.295 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.295 06:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.296 06:44:39 -- setup/common.sh@33 -- # echo 0 00:05:35.296 06:44:39 -- setup/common.sh@33 -- # return 0 00:05:35.296 06:44:39 -- setup/hugepages.sh@100 -- # resv=0 00:05:35.296 nr_hugepages=1025 00:05:35.296 06:44:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:35.296 resv_hugepages=0 00:05:35.296 06:44:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.296 surplus_hugepages=0 00:05:35.296 06:44:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.296 anon_hugepages=0 00:05:35.296 06:44:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.296 06:44:39 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:35.296 06:44:39 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:35.296 06:44:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.296 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.296 06:44:39 -- setup/common.sh@18 -- # local node= 00:05:35.296 06:44:39 -- setup/common.sh@19 -- # local var val 00:05:35.296 06:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.296 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.296 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.296 06:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.296 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.296 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6675812 kB' 'MemAvailable: 9470012 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456616 kB' 'Inactive: 2662240 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120136 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'KernelStack: 6784 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 
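Three scans of the same shape finish with anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), after which hugepages.sh@107 asserts the pool accounting HugePages_Total == nr_hugepages + surp + resv, i.e. 1025 == 1025 + 0 + 0 on this run; the HugePages_Total scan below confirms the left-hand side. The bookkeeping, with the values taken from the meminfo dumps above:

# The invariant checked at hugepages.sh@107/@109, with this run's values.
nr_hugepages=1025   # requested by the odd_alloc test
surp=0              # HugePages_Surp: surplus pages beyond the configured pool
resv=0              # HugePages_Rsvd: reserved but not yet faulted in
total=1025          # HugePages_Total from /proc/meminfo

(( total == nr_hugepages + surp + resv )) && echo "pool accounting consistent"
(( total == nr_hugepages ))               && echo "no surplus or reserved pages"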
00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.296 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.296 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.297 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.297 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.297 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # continue 00:05:35.297 06:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.297 06:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.297 06:44:39 -- setup/common.sh@32 -- # continue 
[xtrace collapsed: get_meminfo walks the remaining /proc/meminfo fields (Mlocked through Unaccepted), logging "IFS=': '", "read -r var val _", and "continue" for every field that is not HugePages_Total; only the matching records are kept below]
00:05:35.297 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:35.297 06:44:39 -- setup/common.sh@33 -- # echo 1025
00:05:35.297 06:44:39 -- setup/common.sh@33 -- # return 0
00:05:35.297 06:44:39 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
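
The records above show the lookup these tests rely on throughout: setup/common.sh reads a meminfo-style file line by line with IFS=': ', compares each key against the requested field, and echoes the value on the first match; hugepages.sh then checks that HugePages_Total equals nr_hugepages plus surplus plus reserved pages. A minimal stand-alone sketch of that parsing technique in bash (the function name and its defaults are illustrative, not the SPDK helper itself):

    get_meminfo_value() {
        # Print the value of one key from a meminfo-style file ("Key: value kB");
        # return 1 if the key never appears.
        local key=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done <"$file"
        return 1
    }
    # e.g. get_meminfo_value HugePages_Total   -> 1025 on the box traced above
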
00:05:35.297 06:44:39 -- setup/hugepages.sh@112 -- # get_nodes
00:05:35.297 06:44:39 -- setup/hugepages.sh@27 -- # local node
00:05:35.297 06:44:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:35.297 06:44:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:35.297 06:44:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:35.297 06:44:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:35.297 06:44:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:35.297 06:44:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:35.297 06:44:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:35.297 06:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:35.297 06:44:39 -- setup/common.sh@18 -- # local node=0
00:05:35.298 06:44:39 -- setup/common.sh@19 -- # local var val
00:05:35.298 06:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:05:35.298 06:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:35.298 06:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:35.298 06:44:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:35.298 06:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:05:35.298 06:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:35.298 06:44:39 -- setup/common.sh@31 -- # IFS=': '
00:05:35.298 06:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6676332 kB' 'MemUsed: 5562776 kB' 'SwapCached: 0 kB' 'Active: 456596 kB' 'Inactive: 2662240 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3000572 kB' 'Mapped: 50988 kB' 'AnonPages: 120144 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:35.298 06:44:39 -- setup/common.sh@31 -- # read -r var val _
[xtrace collapsed: the loop scans each node0 meminfo field (MemTotal through HugePages_Free) for HugePages_Surp, logging "continue" for every non-match]
00:05:35.298 06:44:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:35.298 06:44:39 -- setup/common.sh@33 -- # echo 0
00:05:35.298 06:44:39 -- setup/common.sh@33 -- # return 0
00:05:35.299 06:44:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:35.299 06:44:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:35.299 06:44:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:35.299 node0=1025 expecting 1025
00:05:35.299 06:44:39 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:35.299 06:44:39 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:35.299 
00:05:35.299 real 0m0.531s
00:05:35.299 user 0m0.289s
00:05:35.299 sys 0m0.275s
00:05:35.299 06:44:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:35.299 06:44:39 -- common/autotest_common.sh@10 -- # set +x
00:05:35.299 ************************************
00:05:35.299 END TEST odd_alloc
00:05:35.299 ************************************
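
The per-node pass just traced points the same lookup at /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix before parsing. A hedged per-node sketch of the same idea (the sysfs path is the standard kernel layout; the function name is illustrative):

    shopt -s extglob   # enables the +([0-9]) pattern used below
    node_meminfo_value() {
        # Lines in the per-node file look like "Node 0 HugePages_Surp: 0";
        # drop the "Node <N> " prefix, then match the key as in /proc/meminfo.
        local key=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done </sys/devices/system/node/node"$node"/meminfo
        return 1
    }
    # e.g. node_meminfo_value HugePages_Surp 0   -> 0, as echoed in the trace
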
00:05:35.299 06:44:39 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:35.299 06:44:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:35.299 06:44:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:35.299 06:44:39 -- common/autotest_common.sh@10 -- # set +x
00:05:35.299 ************************************
00:05:35.299 START TEST custom_alloc
00:05:35.299 ************************************
00:05:35.299 06:44:39 -- common/autotest_common.sh@1114 -- # custom_alloc
00:05:35.299 06:44:39 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:35.299 06:44:39 -- setup/hugepages.sh@169 -- # local node
00:05:35.299 06:44:39 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:35.299 06:44:39 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:35.299 06:44:39 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:35.299 06:44:39 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:35.299 06:44:39 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:35.299 06:44:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:35.299 06:44:39 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:35.299 06:44:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:35.299 06:44:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:35.299 06:44:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:35.299 06:44:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:35.299 06:44:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@83 -- # : 0
00:05:35.299 06:44:39 -- setup/hugepages.sh@84 -- # : 0
00:05:35.299 06:44:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:35.299 06:44:39 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:35.299 06:44:39 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:35.299 06:44:39 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:35.299 06:44:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:35.299 06:44:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:35.299 06:44:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:35.299 06:44:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:35.299 06:44:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:35.299 06:44:39 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:35.299 06:44:39 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:35.299 06:44:39 -- setup/hugepages.sh@78 -- # return 0
00:05:35.299 06:44:39 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:35.299 06:44:39 -- setup/hugepages.sh@187 -- # setup output
00:05:35.299 06:44:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:35.299 06:44:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:35.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:35.871 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:35.871 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:35.871 06:44:40 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:35.871 06:44:40 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
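
get_test_nr_hugepages, traced above, converts the requested pool size into a page count and spreads it across nodes; with a single node, everything lands on node 0, which is what HUGENODE='nodes_hp[0]=512' encodes before setup.sh runs. The arithmetic behind nr_hugepages=512, as a sketch (assuming, consistent with the 'Hugepagesize: 2048 kB' and 'Hugetlb: 1048576 kB' shown in the meminfo dumps below, that both quantities are in kB):

    size_kb=1048576        # requested hugepage pool (1 GiB expressed in kB)
    hugepagesize_kb=2048   # default hugepage size reported by meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"                          # 512
    echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"   # pin the whole pool to node 0
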
00:05:35.871 06:44:40 -- setup/hugepages.sh@89 -- # local node
00:05:35.871 06:44:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:35.871 06:44:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:35.871 06:44:40 -- setup/hugepages.sh@92 -- # local surp
00:05:35.871 06:44:40 -- setup/hugepages.sh@93 -- # local resv
00:05:35.871 06:44:40 -- setup/hugepages.sh@94 -- # local anon
00:05:35.871 06:44:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:35.871 06:44:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:35.871 06:44:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:35.871 06:44:40 -- setup/common.sh@18 -- # local node=
00:05:35.871 06:44:40 -- setup/common.sh@19 -- # local var val
00:05:35.871 06:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:05:35.871 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:35.871 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:35.871 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:35.871 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:05:35.871 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:35.871 06:44:40 -- setup/common.sh@31 -- # IFS=': '
00:05:35.871 06:44:40 -- setup/common.sh@31 -- # read -r var val _
00:05:35.871 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7729272 kB' 'MemAvailable: 10523472 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456896 kB' 'Inactive: 2662240 kB' 'Active(anon): 129052 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120152 kB' 'Mapped: 51104 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183260 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100728 kB' 'KernelStack: 6776 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace collapsed: the loop scans each /proc/meminfo field (MemTotal through HardwareCorrupted) for AnonHugePages, logging "continue" for every non-match]
00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:35.872 06:44:40 -- setup/common.sh@33 -- # echo 0
00:05:35.872 06:44:40 -- setup/common.sh@33 -- # return 0
00:05:35.872 06:44:40 -- setup/hugepages.sh@97 -- # anon=0
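
The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record above is the transparent-hugepage gate in verify_nr_hugepages: the kernel marks the active THP mode with brackets, and anon hugepages are only worth counting when that mode is not "never". A sketch of the same check against the standard sysfs knob:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP enabled ($thp); AnonHugePages may be non-zero"
    else
        echo "THP disabled; expect AnonHugePages: 0"
    fi
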
00:05:35.872 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.872 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.872 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.872 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7729272 kB' 'MemAvailable: 10523472 kB' 'Buffers: 3704 kB' 'Cached: 2996868 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 2662240 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120124 kB' 'Mapped: 51104 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183260 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100728 kB' 'KernelStack: 6728 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- 
setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.872 06:44:40 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.872 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.872 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 
00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.873 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:35.873 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:35.873 06:44:40 -- setup/hugepages.sh@99 -- # surp=0 00:05:35.873 06:44:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.873 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.873 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:35.873 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:35.873 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.873 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.873 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.873 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.873 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.873 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7729272 kB' 'MemAvailable: 10523472 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 2662240 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120040 kB' 'Mapped: 
50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183248 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100716 kB' 'KernelStack: 6772 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.873 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.873 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.874 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.874 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.874 06:44:40 -- 
setup/common.sh@31 -- # read -r var val _
[log condensed by editor: the "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" xtrace cycle repeats identically for each remaining /proc/meminfo key; only the final iterations and the matching key are kept below]
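[editor's note] The cycle elided above is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time: IFS=': ' splits each line into a key and a value, every non-matching key takes the continue branch, and the requested field (HugePages_Rsvd here) is echoed when reached. A minimal standalone re-creation of that lookup; the helper name meminfo_value is hypothetical, the real logic lives in setup/common.sh:

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (assumed simplification of
# setup/common.sh's get_meminfo; no per-node handling here).
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys -> "continue" in the trace
        echo "$val"                        # matching key -> "echo 0" in the trace
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}

meminfo_value HugePages_Rsvd               # prints 0 on this run

The trailing "_" in read -r var val _ swallows the "kB" unit column, which is why the traced echo prints a bare number.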
00:05:35.875 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.875 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:35.875 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:35.875 06:44:40 -- setup/hugepages.sh@100 -- # resv=0 00:05:35.875 nr_hugepages=512 00:05:35.875 06:44:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:35.875 resv_hugepages=0 00:05:35.875 06:44:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.875 surplus_hugepages=0 00:05:35.875 06:44:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.875 anon_hugepages=0 00:05:35.875 06:44:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.875 06:44:40 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:35.875 06:44:40 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:35.875 06:44:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.875 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.875 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:35.875 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:35.875 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.875 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.875 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.875 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.875 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.875 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.875 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7729272 kB' 'MemAvailable: 10523472 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 456548 kB' 'Inactive: 2662240 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'KernelStack: 6768 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 324356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.875 06:44:40 -- setup/common.sh@32 -- # continue 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.875 06:44:40 -- setup/common.sh@31 -- # read -r var val _
[log condensed by editor: the same compare-and-continue xtrace cycle repeats for each remaining /proc/meminfo key until HugePages_Total is reached]
00:05:35.876 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.876 06:44:40 -- setup/common.sh@33 -- # echo 512 00:05:35.876 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:35.876 06:44:40 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
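[editor's note] The arithmetic guard just traced is the test's consistency check: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages read back moments earlier. A hedged standalone version of that check; nr_hugepages, surp and resv are hard-coded from this run's values (512, 0, 0) and are assumptions:

#!/usr/bin/env bash
# Re-check hugepage accounting the way the traced guard does.
nr_hugepages=512 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi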
00:05:35.876 06:44:40 -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.876 06:44:40 -- setup/hugepages.sh@27 -- # local node 00:05:35.876 06:44:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.876 06:44:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:35.876 06:44:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.876 06:44:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.876 06:44:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.876 06:44:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.876 06:44:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.876 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.876 06:44:40 -- setup/common.sh@18 -- # local node=0 00:05:35.876 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:35.876 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.876 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.876 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.876 06:44:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.876 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.876 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.876 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.876 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7729272 kB' 'MemUsed: 4509836 kB' 'SwapCached: 0 kB' 'Active: 456776 kB' 'Inactive: 2662244 kB' 'Active(anon): 128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3000576 kB' 'Mapped: 50988 kB' 'AnonPages: 120080 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82532 kB' 'Slab: 183240 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:35.876 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.876 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.876 06:44:40 -- setup/common.sh@32 -- # continue
[log condensed by editor: the compare-and-continue xtrace cycle repeats for each remaining key of node0's meminfo until HugePages_Surp is reached]
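[editor's note] For this per-node pass, get_meminfo switched its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripped the "Node 0 " prefix that every line in that file carries (the mem=("${mem[@]#Node +([0-9]) }") expansion traced above). A small sketch of the same walk over all nodes; the awk field choice assumes lines shaped like "Node 0 HugePages_Total:   512":

#!/usr/bin/env bash
# Enumerate per-NUMA-node hugepage counts the way the traced get_nodes /
# get_meminfo pair does (extglob enables the node+([0-9]) directory pattern).
shopt -s extglob nullglob
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node$node HugePages_Total=${total:-0}"
done

On this single-node VM the loop runs once and reports node0 with 512 pages, matching the "node0=512 expecting 512" result that follows.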
00:05:35.877 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.877 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:35.877 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:35.877 06:44:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.877 06:44:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.877 06:44:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.877 06:44:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.877 node0=512 expecting 512 06:44:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:35.877 06:44:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:35.877 00:05:35.877 real 0m0.554s 00:05:35.877 user 0m0.313s 00:05:35.877 sys 0m0.273s 00:05:35.877 06:44:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.877 06:44:40 -- common/autotest_common.sh@10 -- # set +x 00:05:35.877 ************************************ 00:05:35.877 END TEST custom_alloc 00:05:35.877 ************************************ 00:05:36.137 06:44:40 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:36.137 06:44:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.137 06:44:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.137 06:44:40 -- common/autotest_common.sh@10 -- # set +x 00:05:36.137 ************************************ 00:05:36.137 START TEST no_shrink_alloc 00:05:36.137 ************************************ 00:05:36.137 06:44:40 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:36.137 06:44:40 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:36.137 06:44:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:36.137 06:44:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:36.137 06:44:40 --
setup/hugepages.sh@51 -- # shift 00:05:36.137 06:44:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:36.137 06:44:40 -- setup/hugepages.sh@52 -- # local node_ids 00:05:36.137 06:44:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:36.137 06:44:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:36.137 06:44:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:36.137 06:44:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:36.137 06:44:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:36.137 06:44:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:36.137 06:44:40 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:36.137 06:44:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:36.137 06:44:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:36.137 06:44:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:36.137 06:44:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:36.137 06:44:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:36.137 06:44:40 -- setup/hugepages.sh@73 -- # return 0 00:05:36.137 06:44:40 -- setup/hugepages.sh@198 -- # setup output 00:05:36.137 06:44:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.137 06:44:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.400 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.400 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.400 06:44:40 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:36.400 06:44:40 -- setup/hugepages.sh@89 -- # local node 00:05:36.400 06:44:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.400 06:44:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.400 06:44:40 -- setup/hugepages.sh@92 -- # local surp 00:05:36.400 06:44:40 -- setup/hugepages.sh@93 -- # local resv 00:05:36.400 06:44:40 -- setup/hugepages.sh@94 -- # local anon 00:05:36.400 06:44:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.400 06:44:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.400 06:44:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:36.400 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:36.400 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:36.400 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.400 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.400 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.400 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.400 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.400 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.400 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.400 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.400 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6682416 kB' 'MemAvailable: 9476620 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 457104 kB' 'Inactive: 2662244 kB' 'Active(anon): 129260 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120332 kB' 
'Mapped: 51104 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183184 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100652 kB' 'KernelStack: 6788 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.400 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.400 06:44:40 -- setup/common.sh@32 -- # continue
[log condensed by editor: the compare-and-continue xtrace cycle repeats for each remaining /proc/meminfo key until AnonHugePages is reached]
00:05:36.401 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.401 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:36.401 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:36.401 06:44:40 -- setup/hugepages.sh@97 -- # anon=0 00:05:36.401 06:44:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.401 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.401 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:36.401 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:36.401 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.401 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.401 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.401 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.401 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.401 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.401 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6682436 kB' 'MemAvailable: 9476640 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 456668 kB' 'Inactive: 2662244 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120168 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183188 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100656 kB' 'KernelStack: 6800 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
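[editor's note] Earlier in verify_nr_hugepages the trace evaluated [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: anonymous huge pages are only counted toward the total when the current transparent-hugepage mode string is not "[never]", and on this run the lookup still yielded anon=0. A hedged standalone version of that gate, assuming the standard sysfs location:

#!/usr/bin/env bash
# Re-creation of the THP gate seen in the trace: count AnonHugePages only
# when /sys/kernel/mm/transparent_hugepage/enabled is not set to [never].
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon_hugepages=${anon:-0}"   # matches the anon=0 result in this run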
00:05:36.401 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.401 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue
[log condensed by editor: the compare-and-continue xtrace cycle repeats for the intervening /proc/meminfo keys]
00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': '
00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
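A side note on the `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` spelling that dominates this section: it is not log corruption. When `set -x` traces a `[[ word == "$get" ]]` test, bash escapes every character of the quoted right-hand side so the printed command would still mean a literal (non-glob) match if re-executed; the requested key therefore appears once per scanned field in this escaped form. A quick way to reproduce the effect (hypothetical snippet, not from the test suite; on a recent bash with the default PS4):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]
    # trace output: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]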
00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.402 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.402 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.403 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:36.403 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:36.403 06:44:40 -- setup/hugepages.sh@99 -- # surp=0 00:05:36.403 06:44:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.403 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.403 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:36.403 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:36.403 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.403 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.403 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.403 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.403 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.403 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6683736 kB' 'MemAvailable: 9477940 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 454528 kB' 'Inactive: 2662244 kB' 'Active(anon): 126684 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117720 kB' 'Mapped: 50208 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183168 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100636 kB' 'KernelStack: 6800 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 
-- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 
-- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.403 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.403 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 
06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.404 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:36.404 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:36.404 06:44:40 -- setup/hugepages.sh@100 -- # resv=0 00:05:36.404 nr_hugepages=1024 00:05:36.404 06:44:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.404 resv_hugepages=0 00:05:36.404 06:44:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.404 surplus_hugepages=0 00:05:36.404 06:44:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.404 anon_hugepages=0 00:05:36.404 06:44:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.404 06:44:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.404 06:44:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.404 06:44:40 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:05:36.404 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.404 06:44:40 -- setup/common.sh@18 -- # local node= 00:05:36.404 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:36.404 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.404 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.404 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.404 06:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.404 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.404 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6683848 kB' 'MemAvailable: 9478052 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 454104 kB' 'Inactive: 2662244 kB' 'Active(anon): 126260 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117524 kB' 'Mapped: 50140 kB' 'Shmem: 10488 kB' 'KReclaimable: 82532 kB' 'Slab: 183072 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 100540 kB' 'KernelStack: 6688 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.404 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.404 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 
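By this point hugepages.sh has collected anon=0, surp=0 and resv=0, echoed the expected nr_hugepages=1024, and is re-reading HugePages_Total above; the assertion visible at setup/hugepages.sh@107-110 earlier in the trace requires that every allocated huge page is accounted for. Worked out with this run's values (a sketch of the arithmetic, not the script itself):

    # Values recovered from the trace in this run:
    nr_hugepages=1024; surp=0; resv=0
    total=1024   # HugePages_Total as re-read from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
    # 1024 == 1024 + 0 + 0  ->  prints "hugepage accounting OK"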
00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
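The meminfo snapshots printed in this section are internally consistent, which is a useful sanity check when reading such logs: with 'Hugepagesize: 2048 kB' and 'HugePages_Total: 1024', the kernel's Hugetlb figure must be their product.

    # Hugetlb = HugePages_Total * Hugepagesize
    echo $(( 1024 * 2048 ))   # -> 2097152, matching 'Hugetlb: 2097152 kB' above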
00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.405 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.405 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 
-- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.406 06:44:40 -- setup/common.sh@33 -- # echo 1024 00:05:36.406 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:36.406 06:44:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.406 06:44:40 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.406 06:44:40 -- setup/hugepages.sh@27 -- # local node 00:05:36.406 06:44:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.406 06:44:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.406 06:44:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.406 06:44:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.406 06:44:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.406 06:44:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.406 06:44:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.406 06:44:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.406 06:44:40 -- setup/common.sh@18 -- # local node=0 00:05:36.406 06:44:40 -- setup/common.sh@19 -- # local var val 00:05:36.406 06:44:40 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.406 06:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.406 06:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.406 06:44:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.406 06:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.406 06:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6683848 kB' 'MemUsed: 5555260 kB' 'SwapCached: 0 kB' 'Active: 453864 kB' 'Inactive: 2662244 kB' 'Active(anon): 126020 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 
2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3000576 kB' 'Mapped: 50140 kB' 'AnonPages: 117268 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82520 kB' 'Slab: 183008 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 100488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 
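The HugePages_Surp read in progress here differs from the earlier ones in one respect: get_meminfo was called with node 0 (the trace shows `local node=0` at common.sh@18), so mem_f is switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the `mem=("${mem[@]#Node +([0-9]) }")` step strips the `Node 0 ` prefix that per-node meminfo lines carry, so the same key/value scan works for both files. A sketch of that selection logic, reconstructed from the trace (paths as in the log, details approximate):

    shopt -s extglob                       # needed for the +([0-9]) pattern
    node=0 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 0 MemTotal: ..."; drop the prefix:
    mem=("${mem[@]#Node +([0-9]) }")

This is also why the node-local snapshot above lacks system-wide fields such as SwapTotal or VmallocTotal: the per-node file only reports per-node counters.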
00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.406 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.406 06:44:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # continue 00:05:36.407 06:44:40 -- setup/common.sh@31 -- # IFS=': ' 
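Once this per-node scan completes (the `echo 0` / `return 0` at the start of the next chunk), hugepages.sh folds the result into its per-node bookkeeping and prints the `node0=1024 expecting 1024` line seen below: nodes_test[0] was seeded with 1024 by get_nodes, gains resv and the node's surplus (both zero here), and must still match the expectation. A compressed, simplified sketch of that bookkeeping (the real script also fills nodes_sys from sysfs and compares the two sorted sets, per hugepages.sh@126-130 below):

    nodes_test[0]=1024        # seeded by get_nodes: one node, 1024 pages
    resv=0 surp=0             # both zero in this run
    (( nodes_test[0] += resv + surp ))
    echo "node0=${nodes_test[0]} expecting 1024"
    [[ ${nodes_test[0]} == 1024 ]] && echo "per-node hugepage count OK"

After this check passes, the trace shows the harness re-running setup.sh with CLEAR_HUGE=no NRHUGE=512; since 1024 pages are already allocated on node0, the script reports that and leaves the allocation alone, and the already-bound uio_pci_generic devices are likewise skipped.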
00:05:36.407 06:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.407 06:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.407 06:44:40 -- setup/common.sh@33 -- # echo 0 00:05:36.407 06:44:40 -- setup/common.sh@33 -- # return 0 00:05:36.407 06:44:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.407 06:44:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.407 06:44:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.407 06:44:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.407 node0=1024 expecting 1024 00:05:36.407 06:44:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.407 06:44:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.407 06:44:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:36.407 06:44:40 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:36.407 06:44:40 -- setup/hugepages.sh@202 -- # setup output 00:05:36.407 06:44:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.407 06:44:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.978 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.978 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.978 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:36.978 06:44:41 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:36.978 06:44:41 -- setup/hugepages.sh@89 -- # local node 00:05:36.978 06:44:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.978 06:44:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.978 06:44:41 -- setup/hugepages.sh@92 -- # local surp 00:05:36.978 06:44:41 -- setup/hugepages.sh@93 -- # local resv 00:05:36.978 06:44:41 -- setup/hugepages.sh@94 -- # local anon 00:05:36.978 06:44:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.978 06:44:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.978 06:44:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:36.978 06:44:41 -- setup/common.sh@18 -- # local node= 00:05:36.978 06:44:41 -- setup/common.sh@19 -- # local var val 00:05:36.978 06:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.978 06:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.978 06:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.978 06:44:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.978 06:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.978 06:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.978 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.978 06:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.978 06:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6684508 kB' 'MemAvailable: 9478704 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 454752 kB' 'Inactive: 2662244 kB' 'Active(anon): 126908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118040 kB' 'Mapped: 50572 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182852 kB' 'SReclaimable: 82520 
kB' 'SUnreclaim: 100332 kB' 'KernelStack: 6692 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.978 06:44:41 -- setup/common.sh@32 -- # [... compare-and-continue trace over each /proc/meminfo key in turn ...] 00:05:36.979 06:44:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.979 06:44:41 -- setup/common.sh@33 -- # echo 0 00:05:36.979 06:44:41 -- setup/common.sh@33 -- # return 0 00:05:36.979 06:44:41 -- setup/hugepages.sh@97 -- # anon=0 00:05:36.979 06:44:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.979 06:44:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.979 06:44:41 -- setup/common.sh@18 -- # local node= 00:05:36.979 06:44:41 -- setup/common.sh@19 -- # local var val 00:05:36.979 06:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.979 06:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.979 06:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.979 06:44:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.979 06:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.979 06:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.979 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.979 06:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.980 06:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6684508 kB' 'MemAvailable: 9478704 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 454092 kB' 'Inactive: 2662244 kB' 'Active(anon): 126248 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117384 kB' 'Mapped: 50312 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182852 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 100332 kB' 'KernelStack: 6632 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.980 06:44:41 -- setup/common.sh@32 -- # [... compare-and-continue trace over each key again ...] 00:05:36.981 06:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.981 06:44:41 -- setup/common.sh@33 -- # echo 0 00:05:36.981 06:44:41 -- setup/common.sh@33 -- # return 0 00:05:36.981 06:44:41 -- setup/hugepages.sh@99 -- # surp=0 00:05:36.981 06:44:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.981 06:44:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.981 06:44:41 -- setup/common.sh@18 -- # local node= 00:05:36.981 06:44:41 -- setup/common.sh@19 -- # local var val 00:05:36.981 06:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.981 06:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.981 06:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.981 06:44:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.981 06:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.981 06:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.981 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.981 06:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685516 kB' 'MemAvailable: 9479712 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 453576 kB' 'Inactive: 2662244 kB' 'Active(anon): 125732 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117112 kB' 'Mapped: 50140 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182860 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6656 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.981 06:44:41 -- setup/common.sh@31 -- # read -r var val _
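The repeated scans above are all the same mechanism: get_meminfo mapfiles the meminfo source, strips any "Node <n> " prefix, splits each line on ': ', and echoes the value of the requested key. The following is a minimal sketch reconstructed from this trace, not the verbatim SPDK setup/common.sh; names and structure are assumptions.

#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern used below

# Sketch of the get_meminfo pattern exercised in the trace above.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # Per-node counters live under sysfs; fall back to the global file
    # when no node is given (the trace's [[ -e .../node/meminfo ]] test).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node meminfo prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan key by key; quoting $get forces a literal match, which is
    # what the backslash-escaped pattern in the trace accomplishes.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo HugePages_Surp   -> system-wide surplus pages
#      get_meminfo HugePages_Surp 0 -> surplus pages on NUMA node 0

The compare-and-continue runs in the log are simply this loop visiting every key before the requested one, which is why each lookup produces a full pass over /proc/meminfo in the xtrace output.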
00:05:36.981 06:44:41 -- setup/common.sh@32 -- # [... compare-and-continue trace over each key until HugePages_Rsvd ...] 00:05:36.982 06:44:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.982 06:44:41 -- setup/common.sh@33 -- # echo 0 00:05:36.982 06:44:41 -- setup/common.sh@33 -- # return 0 00:05:36.982 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 06:44:41 -- setup/hugepages.sh@100 -- # resv=0 06:44:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 06:44:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 06:44:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 anon_hugepages=0 06:44:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 06:44:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 06:44:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
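At this point the script has gathered anon, surp, and resv and is checking that the kernel's hugepage pool is consistent with the requested size. A hedged sketch of that bookkeeping, reusing the hypothetical get_meminfo above and the nr_hugepages=1024 value from this run:

# Sketch of the verify_nr_hugepages accounting traced here; assumes the
# get_meminfo sketch above and nr_hugepages=1024 as echoed in the log.
verify_nr_hugepages() {
    local nr_hugepages=1024
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)    # THP usage, expected 0 here
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool is consistent when the kernel's total equals the requested
    # pages plus surplus and reserved -- the (( ... )) checks in the trace.
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
}

With surp and resv both 0, both arithmetic tests reduce to HugePages_Total == 1024, which is exactly what the trace confirms next by reading HugePages_Total.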
00:05:36.982 06:44:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.982 06:44:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.982 06:44:41 -- setup/common.sh@18 -- # local node= 00:05:36.982 06:44:41 -- setup/common.sh@19 -- # local var val 00:05:36.982 06:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.982 06:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.982 06:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.982 06:44:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.982 06:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.982 06:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.982 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.982 06:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.982 06:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685516 kB' 'MemAvailable: 9479712 kB' 'Buffers: 3704 kB' 'Cached: 2996872 kB' 'SwapCached: 0 kB' 'Active: 453592 kB' 'Inactive: 2662244 kB' 'Active(anon): 125748 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117120 kB' 'Mapped: 50140 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182860 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6640 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:05:36.983 06:44:41 -- setup/common.sh@32 -- # [... compare-and-continue trace over each key until HugePages_Total ...] 00:05:36.984 06:44:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.984 06:44:41 -- setup/common.sh@33 -- # echo 1024 00:05:36.984 06:44:41 -- setup/common.sh@33 -- # return 0 00:05:36.984 06:44:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.984 06:44:41 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.984 06:44:41 -- setup/hugepages.sh@27 -- # local node 00:05:36.984 06:44:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.984 06:44:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.984 06:44:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.984 06:44:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.984 06:44:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.984 06:44:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.984 06:44:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.984 06:44:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.984 06:44:41 -- setup/common.sh@18 -- # local node=0 00:05:36.984 06:44:41 -- setup/common.sh@19 -- # local var val 00:05:36.984 06:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.984 06:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.984 06:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.984 06:44:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.984 06:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.984 06:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.984 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.984 06:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6685516 kB' 'MemUsed: 5553592 kB' 'SwapCached: 0 kB' 'Active: 453560 kB' 'Inactive: 2662244 kB' 'Active(anon): 125716 kB' 'Inactive(anon): 0 kB' 'Active(file): 327844 kB' 'Inactive(file): 2662244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3000576 kB' 'Mapped: 50140 kB' 
'AnonPages: 117108 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82520 kB' 'Slab: 182860 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 100340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.984 06:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.984 06:44:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.984 06:44:41 -- setup/common.sh@32 -- # continue [... the same continue/IFS/read xtrace repeats while the scan walks the rest of the node0 meminfo dump (MemFree through HugePages_Free) looking for HugePages_Surp ...] 00:05:36.985 06:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.985 06:44:41 -- setup/common.sh@31 -- # read -r var val _
00:05:36.985 06:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.985 06:44:41 -- setup/common.sh@33 -- # echo 0 00:05:36.985 06:44:41 -- setup/common.sh@33 -- # return 0 00:05:36.985 06:44:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.985 06:44:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.985 06:44:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.985 06:44:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.985 node0=1024 expecting 1024 06:44:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.985 06:44:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.985 00:05:36.985 real 0m1.026s 00:05:36.985 user 0m0.524s 00:05:36.985 sys 0m0.567s 00:05:36.985 06:44:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.985 06:44:41 -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 ************************************ 00:05:36.985 END TEST no_shrink_alloc 00:05:36.985 ************************************ 00:05:36.985 06:44:41 -- setup/hugepages.sh@217 -- # clear_hp 00:05:36.985 06:44:41 -- setup/hugepages.sh@37 -- # local node hp 00:05:36.985 06:44:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:36.985 06:44:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:36.985 06:44:41 -- setup/hugepages.sh@41 -- # echo 0 00:05:36.985 06:44:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:36.985 06:44:41 -- setup/hugepages.sh@41 -- # echo 0 00:05:36.985 06:44:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:36.985 06:44:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:36.985 00:05:36.985 real 0m4.817s 00:05:36.985 user 0m2.414s 00:05:36.985 sys 0m2.399s 00:05:36.985 06:44:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.985 06:44:41 -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 ************************************ 00:05:36.985 END TEST hugepages 00:05:36.985 ************************************
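The long scans above are setup/common.sh's get_meminfo helper at work: it splits each line of /proc/meminfo (or of /sys/devices/system/node/node<N>/meminfo when a node argument is given) on ': ' and echoes the value of the first key that matches, here HugePages_Total (1024) and then HugePages_Surp (0) for node0. A self-contained sketch of that pattern follows; it is reconstructed from the xtrace rather than copied from the script (the real helper slurps the file with mapfile and strips the "Node <n> " prefix with an extglob), so treat the body as illustrative:

# get_meminfo <key> [node] -- print the value of <key>; a sketch of the scan
# traced above, not a copy of setup/common.sh.
get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")  # per-node lines carry a "Node <n>" prefix
    return 1
}

# Matching the two lookups in the log:
#   get_meminfo HugePages_Total     # -> 1024
#   get_meminfo HugePages_Surp 0    # -> 0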
00:05:37.244 06:44:41 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:37.244 06:44:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.244 06:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.244 06:44:41 -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 ************************************ 00:05:37.244 START TEST driver 00:05:37.244 ************************************ 00:05:37.244 06:44:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:37.244 * Looking for test storage... 00:05:37.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.244 06:44:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.244 06:44:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.244 06:44:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.244 06:44:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.244 06:44:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 [... scripts/common.sh xtrace elided: cmp_versions splits 1.15 and 2 on '.-:', compares them field by field via decimal, and returns 0 because 1 < 2 ...] 00:05:37.244 06:44:41 -- scripts/common.sh@367 -- # return 0 00:05:37.244 06:44:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' [... the exports of LCOV_OPTS and LCOV that follow each repeat the same multi-line option block (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1); only the final repetition is kept below ...] 00:05:37.244 06:44:41 --
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.244 --rc genhtml_branch_coverage=1 00:05:37.244 --rc genhtml_function_coverage=1 00:05:37.244 --rc genhtml_legend=1 00:05:37.244 --rc geninfo_all_blocks=1 00:05:37.244 --rc geninfo_unexecuted_blocks=1 00:05:37.244 00:05:37.244 ' 00:05:37.244 06:44:41 -- setup/driver.sh@68 -- # setup reset 00:05:37.245 06:44:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.245 06:44:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.813 06:44:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:37.813 06:44:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.813 06:44:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.813 06:44:42 -- common/autotest_common.sh@10 -- # set +x 00:05:37.813 ************************************ 00:05:37.813 START TEST guess_driver 00:05:37.813 ************************************ 00:05:37.813 06:44:42 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:37.813 06:44:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:37.813 06:44:42 -- setup/driver.sh@47 -- # local fail=0 00:05:37.813 06:44:42 -- setup/driver.sh@49 -- # pick_driver 00:05:37.813 06:44:42 -- setup/driver.sh@36 -- # vfio 00:05:37.813 06:44:42 -- setup/driver.sh@21 -- # local iommu_grups 00:05:37.813 06:44:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:37.813 06:44:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:37.813 06:44:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:37.813 06:44:42 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:37.813 06:44:42 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:37.813 06:44:42 -- setup/driver.sh@32 -- # return 1 00:05:37.813 06:44:42 -- setup/driver.sh@38 -- # uio 00:05:37.813 06:44:42 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:37.813 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:37.813 06:44:42 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:37.813 Looking for driver=uio_pci_generic 00:05:37.813 06:44:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:37.813 06:44:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:37.813 06:44:42 -- setup/driver.sh@45 -- # setup output config 00:05:37.813 06:44:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.813 06:44:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.381 06:44:42 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:38.381 06:44:42 -- setup/driver.sh@58 -- # continue 00:05:38.381 06:44:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.640 06:44:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:38.640 06:44:43 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:38.640 06:44:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.640 06:44:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:38.640 06:44:43 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:38.640 06:44:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.640 06:44:43 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:38.640 06:44:43 -- setup/driver.sh@65 -- # setup reset 00:05:38.640 06:44:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.640 06:44:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.207 00:05:39.207 real 0m1.429s 00:05:39.207 user 0m0.561s 00:05:39.207 sys 0m0.861s 00:05:39.207 06:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.207 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.207 ************************************ 00:05:39.207 END TEST guess_driver 00:05:39.207 ************************************ 00:05:39.466 00:05:39.466 real 0m2.208s 00:05:39.466 user 0m0.870s 00:05:39.466 sys 0m1.392s 00:05:39.466 06:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.466 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 ************************************ 00:05:39.466 END TEST driver 00:05:39.466 ************************************
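The guess_driver run that just ended makes its decision visible in the trace: pick_driver tries vfio first, finds zero IOMMU groups and unsafe no-IOMMU mode unset, and falls back to uio_pci_generic because modprobe --show-depends resolves it to real .ko modules. A minimal sketch of that decision, reconstructed from the xtrace rather than taken from driver.sh itself, so details are illustrative:

# Sketch of the pick_driver decision traced above; reconstructed, not a copy.
pick_driver() {
    local n_groups unsafe_vfio=''
    # vfio is usable only with IOMMU groups present, or with unsafe
    # no-IOMMU mode explicitly enabled (the trace showed neither).
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( n_groups > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    # Otherwise fall back to uio_pci_generic, provided modprobe can resolve
    # it to an actual .ko module (what the is_driver check above does).
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}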
00:05:39.466 06:44:43 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.466 06:44:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.466 06:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.466 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 ************************************ 00:05:39.466 START TEST devices 00:05:39.466 ************************************ 00:05:39.466 06:44:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.466 * Looking for test storage... 00:05:39.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup [... the same lcov version-check and LCOV_OPTS/LCOV export xtrace that opened TEST driver repeats here verbatim ...] 00:05:39.726 06:44:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:39.726 06:44:43 -- setup/devices.sh@192 -- # setup reset 00:05:39.726 06:44:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.726 06:44:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.294 06:44:44 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:40.294 06:44:44 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:40.294 06:44:44 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:40.294 06:44:44 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:40.294 06:44:44 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.294 06:44:44 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:40.294 06:44:44 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:40.294 06:44:44 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1660
-- # [[ none != none ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.294 06:44:44 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:40.294 06:44:44 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:40.294 06:44:44 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.294 06:44:44 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:40.294 06:44:44 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:40.294 06:44:44 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.294 06:44:44 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.294 06:44:44 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:40.294 06:44:44 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:40.294 06:44:44 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:40.295 06:44:44 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.295 06:44:44 -- setup/devices.sh@196 -- # blocks=() 00:05:40.295 06:44:44 -- setup/devices.sh@196 -- # declare -a blocks 00:05:40.295 06:44:44 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:40.295 06:44:44 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:40.295 06:44:44 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:40.295 06:44:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.295 06:44:44 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:40.295 06:44:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:40.295 06:44:44 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:40.295 06:44:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:40.295 06:44:44 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:40.295 06:44:44 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:40.295 06:44:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:40.295 No valid GPT data, bailing 00:05:40.295 06:44:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:40.295 06:44:44 -- scripts/common.sh@393 -- # pt= 00:05:40.295 06:44:44 -- scripts/common.sh@394 -- # return 1 00:05:40.295 06:44:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:40.295 06:44:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:40.295 06:44:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:40.295 06:44:44 -- setup/common.sh@80 -- # echo 5368709120 00:05:40.295 06:44:44 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:40.295 06:44:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.295 06:44:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:40.295 06:44:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.295 06:44:44 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:40.295 06:44:44 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:40.295 06:44:44 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:40.295 06:44:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:40.295 06:44:44 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:40.295 06:44:44 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:40.295 06:44:44 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:40.554 No valid GPT data, bailing 00:05:40.554 06:44:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:40.554 06:44:44 -- scripts/common.sh@393 -- # pt= 00:05:40.554 06:44:44 -- scripts/common.sh@394 -- # return 1 00:05:40.554 06:44:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:40.554 06:44:44 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:40.554 06:44:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:40.554 06:44:44 -- setup/common.sh@80 -- # echo 4294967296 00:05:40.554 06:44:44 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.554 06:44:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.554 06:44:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 [... the identical probe then repeats for nvme1n2 and nvme1n3, both on PCI 0000:00:07.0: spdk-gpt.py bails with no valid GPT data, blkid finds no partition table, and sec_size_to_bytes reports 4294967296 bytes for each; the trace resumes at nvme1n3's size check ...] 00:05:40.555 06:44:45 -- setup/common.sh@80 -- # echo 4294967296
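Every namespace goes through the same probe, and the remaining size check for nvme1n3 continues just below. A condensed sketch of the per-device logic: the spdk-gpt.py and blkid steps and the 3 GiB minimum are taken from the trace, while the surrounding loop is reconstructed and illustrative.

# Sketch of the block-device filter seen above; reconstructed from the xtrace.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
usable_blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # A device counts as "in use" if it carries a partition table; blkid's
    # PTTYPE probe is the fallback the trace shows after spdk-gpt.py bails.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # sec_size_to_bytes: the sysfs 'size' attribute counts 512-byte sectors.
    size=$(( $(cat "$block/size") * 512 ))
    (( size >= min_disk_size )) && usable_blocks+=("$dev")
done
printf 'usable: %s\n' "${usable_blocks[@]}"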
00:05:40.555 06:44:45 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.555 06:44:45 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.555 06:44:45 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:40.555 06:44:45 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:40.555 06:44:45 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:40.555 06:44:45 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:40.555 06:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.555 06:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.555 06:44:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.555 ************************************ 00:05:40.555 START TEST nvme_mount 00:05:40.555 ************************************ 00:05:40.555 06:44:45 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:40.555 06:44:45 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:40.555 06:44:45 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:40.555 06:44:45 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.555 06:44:45 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:40.555 06:44:45 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:40.555 06:44:45 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:40.555 06:44:45 -- setup/common.sh@40 -- # local part_no=1 00:05:40.555 06:44:45 -- setup/common.sh@41 -- # local size=1073741824 00:05:40.555 06:44:45 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:40.555 06:44:45 -- setup/common.sh@44 -- # parts=() 00:05:40.555 06:44:45 -- setup/common.sh@44 -- # local parts 00:05:40.555 06:44:45 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:40.555 06:44:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.555 06:44:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.555 06:44:45 -- setup/common.sh@46 -- # (( part++ )) 00:05:40.555 06:44:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.555 06:44:45 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:40.555 06:44:45 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:40.555 06:44:45 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:41.932 Creating new GPT entries in memory. 00:05:41.932 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.932 other utilities. 00:05:41.932 06:44:46 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.932 06:44:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.932 06:44:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.932 06:44:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.932 06:44:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:42.867 Creating new GPT entries in memory. 00:05:42.867 The operation has completed successfully. 
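The sgdisk records above are partition_drive laying out the test disk: zap any existing label, then carve equal-sized GPT partitions starting at sector 2048. A stand-alone sketch under the same numbers; the names mirror the trace, but the body is reconstructed, and udevadm settle stands in for the scripts/sync_dev_uevents.sh helper the log shows:

# partition_drive <disk> [part_no] -- sketch of the sequence traced above.
partition_drive() {
    local disk=$1 part_no=${2:-1}
    local size=$((1073741824 / 4096))  # partition size in sectors, as in the trace
    local part part_start=0 part_end=0
    sgdisk "/dev/$disk" --zap-all
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes concurrent sgdisk invocations against the disk.
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    udevadm settle  # assumption: stand-in for sync_dev_uevents.sh waiting on add events
}

# With these numbers, partition 1 spans sectors 2048..264191 and partition 2
# spans 264192..526335, exactly the --new arguments logged here.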
00:05:42.867 06:44:47 -- setup/common.sh@57 -- # (( part++ )) 00:05:42.867 06:44:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.868 06:44:47 -- setup/common.sh@62 -- # wait 64159 00:05:42.868 06:44:47 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.868 06:44:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:42.868 06:44:47 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.868 06:44:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.868 06:44:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.868 06:44:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.868 06:44:47 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.868 06:44:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:42.868 06:44:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:42.868 06:44:47 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.868 06:44:47 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.868 06:44:47 -- setup/devices.sh@53 -- # local found=0 00:05:42.868 06:44:47 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.868 06:44:47 -- setup/devices.sh@56 -- # : 00:05:42.868 06:44:47 -- setup/devices.sh@59 -- # local pci status 00:05:42.868 06:44:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.868 06:44:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:42.868 06:44:47 -- setup/devices.sh@47 -- # setup output config 00:05:42.868 06:44:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.868 06:44:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.868 06:44:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.868 06:44:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.868 06:44:47 -- setup/devices.sh@63 -- # found=1 00:05:42.868 06:44:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.868 06:44:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.868 06:44:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.435 06:44:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.435 06:44:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.435 06:44:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.435 06:44:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.435 06:44:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.435 06:44:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:43.435 06:44:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.435 06:44:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.435 06:44:47 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.435 06:44:47 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:43.435 06:44:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.435 06:44:47 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.435 06:44:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.435 06:44:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.435 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.435 06:44:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.435 06:44:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.695 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.695 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.695 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.695 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.695 06:44:48 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:43.695 06:44:48 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:43.695 06:44:48 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.695 06:44:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:43.695 06:44:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:43.695 06:44:48 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.695 06:44:48 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.695 06:44:48 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:43.695 06:44:48 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:43.695 06:44:48 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.695 06:44:48 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.695 06:44:48 -- setup/devices.sh@53 -- # local found=0 00:05:43.695 06:44:48 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.695 06:44:48 -- setup/devices.sh@56 -- # : 00:05:43.695 06:44:48 -- setup/devices.sh@59 -- # local pci status 00:05:43.695 06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.695 06:44:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:43.695 06:44:48 -- setup/devices.sh@47 -- # setup output config 00:05:43.695 06:44:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.695 06:44:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.954 06:44:48 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.954 06:44:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:43.954 06:44:48 -- setup/devices.sh@63 -- # found=1 00:05:43.954 06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.954 06:44:48 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.954 
06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.213 06:44:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.213 06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.213 06:44:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.213 06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.472 06:44:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.472 06:44:48 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:44.472 06:44:48 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.472 06:44:48 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.472 06:44:48 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.472 06:44:48 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.472 06:44:48 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:44.472 06:44:48 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:44.472 06:44:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.472 06:44:48 -- setup/devices.sh@50 -- # local mount_point= 00:05:44.472 06:44:48 -- setup/devices.sh@51 -- # local test_file= 00:05:44.472 06:44:48 -- setup/devices.sh@53 -- # local found=0 00:05:44.472 06:44:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.472 06:44:48 -- setup/devices.sh@59 -- # local pci status 00:05:44.472 06:44:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.472 06:44:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:44.472 06:44:48 -- setup/devices.sh@47 -- # setup output config 00:05:44.472 06:44:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.472 06:44:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.731 06:44:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.731 06:44:49 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:44.731 06:44:49 -- setup/devices.sh@63 -- # found=1 00:05:44.731 06:44:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.731 06:44:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.731 06:44:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.990 06:44:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.990 06:44:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.990 06:44:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.990 06:44:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.990 06:44:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.990 06:44:49 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.990 06:44:49 -- setup/devices.sh@68 -- # return 0 00:05:44.990 06:44:49 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:44.990 06:44:49 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.990 06:44:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.990 06:44:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.990 06:44:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.990 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:44.990 00:05:44.990 real 0m4.450s 00:05:44.990 user 0m0.987s 00:05:44.990 sys 0m1.153s 00:05:44.990 06:44:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.990 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:05:44.990 ************************************ 00:05:44.990 END TEST nvme_mount 00:05:44.990 ************************************
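The wipefs output just above is cleanup_nvme undoing the mount test: unmount if still mounted, then wipe signatures from the partition and from the whole disk (the ext4 magic 53 ef at offset 0x438, both GPT headers, and the protective MBR). Sketched from the trace, with the mount point hard-coded for illustration:

# Sketch of cleanup_nvme as traced above (devices.sh@20-28); reconstructed.
cleanup_nvme() {
    local mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    # wipefs --all erases every known signature it finds, which is exactly
    # the set of byte ranges reported in the log records above.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
}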
00:05:45.250 06:44:49 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:45.250 06:44:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.250 06:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.250 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:05:45.250 ************************************ 00:05:45.250 START TEST dm_mount 00:05:45.250 ************************************ 00:05:45.250 06:44:49 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:45.250 06:44:49 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:45.250 06:44:49 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:45.250 06:44:49 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:45.250 06:44:49 -- setup/devices.sh@148 -- # partition_drive nvme0n1 [... setup/common.sh xtrace elided: partition_drive collects parts=(nvme0n1p1 nvme0n1p2), converts size to sectors, runs 'sgdisk /dev/nvme0n1 --zap-all', and starts sync_dev_uevents.sh to wait for both partition uevents ...] 00:05:46.190 Creating new GPT entries in memory. 00:05:46.190 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.190 other utilities. 00:05:46.190 06:44:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.188 Creating new GPT entries in memory. 00:05:47.188 The operation has completed successfully. 00:05:47.188 06:44:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:48.123 The operation has completed successfully. 00:05:48.123 06:44:52 -- setup/common.sh@57 -- # (( part++ )) 00:05:48.123 06:44:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.123 06:44:52 -- setup/common.sh@62 -- # wait 64618 00:05:48.381 06:44:52 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.381 06:44:52 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.381 06:44:52 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.381 06:44:52 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.381 06:44:52 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.381 06:44:52 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.381 06:44:52 -- setup/devices.sh@161 -- # break 00:05:48.381 06:44:52 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.381 06:44:52 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.381 06:44:52 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.381 06:44:52 -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.381 06:44:52 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:48.381 06:44:52 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:48.381 06:44:52 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.381 06:44:52 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:48.381 06:44:52 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.381 06:44:52 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.381 06:44:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.381 06:44:52 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.381 06:44:52 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm [... verify's local-variable records elided; it then reads the PCI status lines produced by 'setup output config' ...] 00:05:48.381 06:44:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.381 06:44:52 --
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.381 06:44:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:48.382 06:44:52 -- setup/devices.sh@63 -- # found=1 00:05:48.382 06:44:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.382 06:44:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.382 06:44:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.949 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.949 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.949 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.949 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.949 06:44:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.949 06:44:53 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:48.949 06:44:53 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.949 06:44:53 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.949 06:44:53 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.949 06:44:53 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.949 06:44:53 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:48.949 06:44:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.949 06:44:53 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:48.949 06:44:53 -- setup/devices.sh@50 -- # local mount_point= 00:05:48.949 06:44:53 -- setup/devices.sh@51 -- # local test_file= 00:05:48.949 06:44:53 -- setup/devices.sh@53 -- # local found=0 00:05:48.949 06:44:53 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:48.949 06:44:53 -- setup/devices.sh@59 -- # local pci status 00:05:48.949 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.949 06:44:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.949 06:44:53 -- setup/devices.sh@47 -- # setup output config 00:05:48.949 06:44:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.949 06:44:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.208 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.208 06:44:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:49.208 06:44:53 -- setup/devices.sh@63 -- # found=1 00:05:49.208 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.208 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.208 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.466 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.466 06:44:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.466 06:44:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.466 06:44:53 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.725 06:44:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.725 06:44:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.725 06:44:54 -- setup/devices.sh@68 -- # return 0 00:05:49.725 06:44:54 -- setup/devices.sh@187 -- # cleanup_dm 00:05:49.725 06:44:54 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.725 06:44:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.725 06:44:54 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:49.725 06:44:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.725 06:44:54 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:49.725 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.725 06:44:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.725 06:44:54 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:49.725 00:05:49.725 real 0m4.521s 00:05:49.725 user 0m0.694s 00:05:49.725 sys 0m0.754s 00:05:49.725 06:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.725 06:44:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.725 ************************************ 00:05:49.725 END TEST dm_mount 00:05:49.725 ************************************ 00:05:49.725 06:44:54 -- setup/devices.sh@1 -- # cleanup 00:05:49.725 06:44:54 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:49.725 06:44:54 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.725 06:44:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.725 06:44:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.725 06:44:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.725 06:44:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.984 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.984 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.984 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:49.984 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:49.984 06:44:54 -- setup/devices.sh@12 -- # cleanup_dm 00:05:49.984 06:44:54 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.984 06:44:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.984 06:44:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.984 06:44:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.984 06:44:54 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.984 06:44:54 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:49.984 00:05:49.984 real 0m10.618s 00:05:49.984 user 0m2.433s 00:05:49.984 sys 0m2.518s 00:05:49.984 06:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.984 06:44:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.984 ************************************ 00:05:49.984 END TEST devices 00:05:49.984 ************************************ 00:05:49.984 00:05:49.984 real 0m22.327s 00:05:49.984 user 0m7.781s 00:05:49.984 sys 0m8.858s 00:05:49.984 06:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.984 06:44:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.984 ************************************ 00:05:49.984 END TEST setup.sh 00:05:49.984 ************************************ 00:05:49.984 06:44:54 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:50.243 Hugepages 00:05:50.243 node hugesize free / total 00:05:50.243 node0 1048576kB 0 / 0 00:05:50.243 node0 2048kB 2048 / 2048 00:05:50.243 00:05:50.243 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.243 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:50.501 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:50.501 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:50.501 06:44:54 -- spdk/autotest.sh@128 -- # uname -s 00:05:50.501 06:44:54 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:50.501 06:44:54 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:50.501 06:44:54 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.332 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.332 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.332 06:44:55 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:52.268 06:44:56 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:52.268 06:44:56 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:52.268 06:44:56 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:52.268 06:44:56 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:52.268 06:44:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:52.268 06:44:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:52.268 06:44:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.268 06:44:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:52.268 06:44:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:52.527 06:44:56 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:52.527 06:44:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:52.527 06:44:56 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.786 Waiting for block devices as requested 00:05:52.786 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.786 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.044 06:44:57 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:53.044 06:44:57 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:53.044 06:44:57 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:53.044 06:44:57 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:53.044 06:44:57 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:53.044 06:44:57 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:53.044 06:44:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:53.044 06:44:57 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:53.044 06:44:57 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:53.044 06:44:57 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:53.044 06:44:57 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:53.044 06:44:57 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:53.044 06:44:57 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:53.044 06:44:57 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:53.044 06:44:57 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:53.044 06:44:57 -- common/autotest_common.sh@1552 -- # continue 00:05:53.045 06:44:57 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:53.045 06:44:57 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:53.045 06:44:57 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:53.045 06:44:57 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:53.045 06:44:57 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:53.045 06:44:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:53.045 06:44:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:53.045 06:44:57 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:53.045 06:44:57 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:53.045 06:44:57 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:53.045 06:44:57 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:53.045 06:44:57 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:53.045 06:44:57 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:53.045 06:44:57 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:53.045 06:44:57 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:53.045 06:44:57 -- common/autotest_common.sh@1552 -- # continue 00:05:53.045 06:44:57 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:53.045 06:44:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.045 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.045 06:44:57 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:53.045 06:44:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.045 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.045 06:44:57 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.872 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.872 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:53.872 06:44:58 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:53.872 06:44:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.872 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.131 06:44:58 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:54.131 06:44:58 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:54.131 06:44:58 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:54.131 06:44:58 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:54.131 06:44:58 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:54.131 06:44:58 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:54.131 06:44:58 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:54.131 06:44:58 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:54.131 06:44:58 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.131 06:44:58 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:54.131 06:44:58 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:54.131 06:44:58 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:54.131 06:44:58 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:54.131 06:44:58 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:54.131 06:44:58 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:54.131 06:44:58 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:54.131 06:44:58 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.131 06:44:58 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:54.131 06:44:58 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:54.131 06:44:58 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:54.131 06:44:58 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.131 06:44:58 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:54.131 06:44:58 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:54.131 06:44:58 -- common/autotest_common.sh@1588 -- # return 0 00:05:54.131 06:44:58 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:54.131 06:44:58 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:54.131 06:44:58 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:54.131 06:44:58 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:54.131 06:44:58 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:54.131 06:44:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.131 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.131 06:44:58 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.131 06:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.131 06:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.131 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.131 ************************************ 00:05:54.131 START TEST env 00:05:54.131 ************************************ 00:05:54.131 06:44:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.131 * Looking for test storage... 
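The opal_revert_cleanup step above walks the enumerated controllers and compares each one's PCI device ID from sysfs against 0x0a54, the ID the OPAL tests look for; both QEMU-emulated controllers report 0x0010, so nothing matches, the printf emits an empty list, and the function returns 0 without reverting anything. A minimal standalone sketch of that filter, using only the sysfs paths and the constant visible in the log (the loop itself is illustrative):

# Print only the NVMe controllers whose PCI device ID matches the OPAL target.
target=0x0a54
for bdf in 0000:00:06.0 0000:00:07.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 here (QEMU NVMe)
    [[ $device == "$target" ]] && echo "$bdf"
done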
00:05:54.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:54.131 06:44:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.131 06:44:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.131 06:44:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.391 06:44:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.391 06:44:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.391 06:44:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.391 06:44:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.391 06:44:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.391 06:44:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.391 06:44:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.391 06:44:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.391 06:44:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.391 06:44:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.391 06:44:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.391 06:44:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.391 06:44:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.391 06:44:58 -- scripts/common.sh@344 -- # : 1 00:05:54.391 06:44:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.391 06:44:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.391 06:44:58 -- scripts/common.sh@364 -- # decimal 1 00:05:54.391 06:44:58 -- scripts/common.sh@352 -- # local d=1 00:05:54.391 06:44:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.391 06:44:58 -- scripts/common.sh@354 -- # echo 1 00:05:54.391 06:44:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.391 06:44:58 -- scripts/common.sh@365 -- # decimal 2 00:05:54.391 06:44:58 -- scripts/common.sh@352 -- # local d=2 00:05:54.391 06:44:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.391 06:44:58 -- scripts/common.sh@354 -- # echo 2 00:05:54.391 06:44:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.391 06:44:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.391 06:44:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.391 06:44:58 -- scripts/common.sh@367 -- # return 0 00:05:54.391 06:44:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.391 06:44:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 06:44:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 06:44:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 06:44:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 06:44:58 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.391 06:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.391 06:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.391 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.391 ************************************ 00:05:54.391 START TEST env_memory 00:05:54.391 ************************************ 00:05:54.391 06:44:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.391 00:05:54.391 00:05:54.391 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.391 http://cunit.sourceforge.net/ 00:05:54.391 00:05:54.391 00:05:54.391 Suite: memory 00:05:54.391 Test: alloc and free memory map ...[2024-12-13 06:44:58.741850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.391 passed 00:05:54.391 Test: mem map translation ...[2024-12-13 06:44:58.781341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.391 [2024-12-13 06:44:58.782641] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.391 [2024-12-13 06:44:58.782757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.391 [2024-12-13 06:44:58.782776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.391 passed 00:05:54.391 Test: mem map registration ...[2024-12-13 06:44:58.854273] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:54.391 [2024-12-13 06:44:58.854331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:54.391 passed 00:05:54.651 Test: mem map adjacent registrations ...passed 00:05:54.651 00:05:54.651 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.651 suites 1 1 n/a 0 0 00:05:54.651 tests 4 4 4 0 0 00:05:54.651 asserts 152 152 152 0 n/a 00:05:54.651 00:05:54.651 Elapsed time = 0.235 seconds 00:05:54.651 ************************************ 00:05:54.651 END TEST env_memory 00:05:54.651 ************************************ 00:05:54.651 00:05:54.651 real 0m0.259s 00:05:54.651 user 0m0.235s 00:05:54.651 sys 0m0.012s 00:05:54.651 06:44:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.651 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.651 06:44:58 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.651 06:44:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.651 06:44:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.651 06:44:58 -- 
common/autotest_common.sh@10 -- # set +x 00:05:54.651 ************************************ 00:05:54.651 START TEST env_vtophys 00:05:54.651 ************************************ 00:05:54.651 06:44:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.651 EAL: lib.eal log level changed from notice to debug 00:05:54.651 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 1 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 2 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 3 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 4 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 5 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 6 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 7 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 8 as core 0 on socket 0 00:05:54.651 EAL: Detected lcore 9 as core 0 on socket 0 00:05:54.651 EAL: Maximum logical cores by configuration: 128 00:05:54.651 EAL: Detected CPU lcores: 10 00:05:54.651 EAL: Detected NUMA nodes: 1 00:05:54.651 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:54.651 EAL: Detected shared linkage of DPDK 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:54.651 EAL: Registered [vdev] bus. 00:05:54.651 EAL: bus.vdev log level changed from disabled to notice 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:54.651 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:54.651 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:54.651 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:54.651 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Selected IOVA mode 'PA' 00:05:54.651 EAL: Probing VFIO support... 00:05:54.651 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:54.651 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:54.651 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.651 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.651 EAL: Setting up physically contiguous memory... 
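The VFIO probe above fails because neither vfio module is loaded in the test VM, which is why EAL skips VFIO support and selects IOVA mode 'PA', and why the earlier setup.sh runs bound the NVMe controllers to uio_pci_generic rather than vfio-pci. A rough equivalent of that probe, assuming only the module paths shown in the log (the branch logic is a sketch, not EAL's actual code):

# Decide between vfio-pci and uio_pci_generic the way EAL's probe does (sketch).
if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "VFIO modules loaded: vfio-pci binding and IOVA mode VA are possible"
else
    echo "VFIO missing: fall back to uio_pci_generic, IOVA mode PA"
fi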
00:05:54.651 EAL: Setting maximum number of open files to 524288 00:05:54.651 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.651 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.651 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.651 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.651 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.651 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.651 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.651 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.651 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.651 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.651 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.651 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.651 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.651 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.651 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.651 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.651 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.651 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.651 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.651 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.651 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.651 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.651 EAL: Hugepages will be freed exactly as allocated. 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: TSC frequency is ~2200000 KHz 00:05:54.651 EAL: Main lcore 0 is ready (tid=7f54a9e57a00;cpuset=[0]) 00:05:54.651 EAL: Trying to obtain current memory policy. 00:05:54.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.651 EAL: Restoring previous memory policy: 0 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was expanded by 2MB 00:05:54.651 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:54.651 EAL: Mem event callback 'spdk:(nil)' registered 00:05:54.651 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:54.651 00:05:54.651 00:05:54.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.651 http://cunit.sourceforge.net/ 00:05:54.651 00:05:54.651 00:05:54.651 Suite: components_suite 00:05:54.651 Test: vtophys_malloc_test ...passed 00:05:54.651 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
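Each of the four 'VA reserved for memseg list' reservations above covers 0x400000000 bytes, which is simply the list capacity times the hugepage size reported when the lists were created: 8192 segments x 2 MiB = 16 GiB of virtual address space per list. A quick check of that arithmetic (illustrative):

# n_segs * hugepage_sz = reserved VA per memseg list
printf '%d bytes = 0x%x\n' $((8192 * 2097152)) $((8192 * 2097152))
# -> 17179869184 bytes = 0x400000000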
00:05:54.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.651 EAL: Restoring previous memory policy: 4 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was expanded by 4MB 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was shrunk by 4MB 00:05:54.651 EAL: Trying to obtain current memory policy. 00:05:54.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.651 EAL: Restoring previous memory policy: 4 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was expanded by 6MB 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was shrunk by 6MB 00:05:54.651 EAL: Trying to obtain current memory policy. 00:05:54.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.651 EAL: Restoring previous memory policy: 4 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.651 EAL: No shared files mode enabled, IPC is disabled 00:05:54.651 EAL: Heap on socket 0 was expanded by 10MB 00:05:54.651 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.651 EAL: request: mp_malloc_sync 00:05:54.652 EAL: No shared files mode enabled, IPC is disabled 00:05:54.652 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.652 EAL: Trying to obtain current memory policy. 00:05:54.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.911 EAL: Restoring previous memory policy: 4 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.911 EAL: Trying to obtain current memory policy. 00:05:54.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.911 EAL: Restoring previous memory policy: 4 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.911 EAL: Trying to obtain current memory policy. 
00:05:54.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.911 EAL: Restoring previous memory policy: 4 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.911 EAL: Trying to obtain current memory policy. 00:05:54.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.911 EAL: Restoring previous memory policy: 4 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was expanded by 130MB 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was shrunk by 130MB 00:05:54.911 EAL: Trying to obtain current memory policy. 00:05:54.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.911 EAL: Restoring previous memory policy: 4 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was expanded by 258MB 00:05:54.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.911 EAL: request: mp_malloc_sync 00:05:54.911 EAL: No shared files mode enabled, IPC is disabled 00:05:54.911 EAL: Heap on socket 0 was shrunk by 258MB 00:05:54.911 EAL: Trying to obtain current memory policy. 00:05:54.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.170 EAL: Restoring previous memory policy: 4 00:05:55.170 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.170 EAL: request: mp_malloc_sync 00:05:55.170 EAL: No shared files mode enabled, IPC is disabled 00:05:55.170 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.170 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.170 EAL: request: mp_malloc_sync 00:05:55.170 EAL: No shared files mode enabled, IPC is disabled 00:05:55.170 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.170 EAL: Trying to obtain current memory policy. 
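The expand/shrink rounds in vtophys_spdk_malloc_test step through buffer sizes of 2^n + 2 MB: 4, 6, 10, 18, 34, 66, 130 and 258 MB so far, with 514 MB and 1026 MB still to come below. Each allocation fires the 'spdk:(nil)' mem event callback with the heap grown by that amount, and the matching free shrinks it again. The size progression can be reproduced directly (illustrative):

# Allocation sizes exercised by the test: (1 << n) + 2 MiB for n = 1..10.
for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB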
00:05:55.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.429 EAL: Restoring previous memory policy: 4 00:05:55.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.429 EAL: request: mp_malloc_sync 00:05:55.429 EAL: No shared files mode enabled, IPC is disabled 00:05:55.429 EAL: Heap on socket 0 was expanded by 1026MB 00:05:55.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.429 passed 00:05:55.429 00:05:55.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.429 suites 1 1 n/a 0 0 00:05:55.429 tests 2 2 2 0 0 00:05:55.429 asserts 5148 5148 5148 0 n/a 00:05:55.429 00:05:55.429 Elapsed time = 0.742 seconds 00:05:55.429 EAL: request: mp_malloc_sync 00:05:55.429 EAL: No shared files mode enabled, IPC is disabled 00:05:55.429 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:55.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.429 EAL: request: mp_malloc_sync 00:05:55.429 EAL: No shared files mode enabled, IPC is disabled 00:05:55.429 EAL: Heap on socket 0 was shrunk by 2MB 00:05:55.429 EAL: No shared files mode enabled, IPC is disabled 00:05:55.430 EAL: No shared files mode enabled, IPC is disabled 00:05:55.430 EAL: No shared files mode enabled, IPC is disabled 00:05:55.430 00:05:55.430 real 0m0.943s 00:05:55.430 user 0m0.477s 00:05:55.430 sys 0m0.330s 00:05:55.430 06:44:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.430 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.430 ************************************ 00:05:55.430 END TEST env_vtophys 00:05:55.430 ************************************ 00:05:55.688 06:44:59 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:55.688 06:44:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.688 06:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.688 06:44:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.688 ************************************ 00:05:55.688 START TEST env_pci 00:05:55.688 ************************************ 00:05:55.688 06:44:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:55.688 00:05:55.688 00:05:55.688 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.688 http://cunit.sourceforge.net/ 00:05:55.688 00:05:55.688 00:05:55.688 Suite: pci 00:05:55.688 Test: pci_hook ...[2024-12-13 06:45:00.001118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65751 has claimed it 00:05:55.688 passed 00:05:55.688 00:05:55.688 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.688 suites 1 1 n/a 0 0 00:05:55.688 tests 1 1 1 0 0 00:05:55.688 asserts 25 25 25 0 n/a 00:05:55.688 00:05:55.688 Elapsed time = 0.003 seconds 00:05:55.688 EAL: Cannot find device (10000:00:01.0) 00:05:55.688 EAL: Failed to attach device on primary process 00:05:55.688 00:05:55.688 real 0m0.020s 00:05:55.688 user 0m0.008s 00:05:55.688 sys 0m0.011s 00:05:55.688 ************************************ 00:05:55.688 END TEST env_pci 00:05:55.688 ************************************ 00:05:55.688 06:45:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.689 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.689 06:45:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:55.689 06:45:00 -- env/env.sh@15 -- # uname 00:05:55.689 06:45:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:55.689 06:45:00 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:55.689 06:45:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:55.689 06:45:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:55.689 06:45:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.689 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.689 ************************************ 00:05:55.689 START TEST env_dpdk_post_init 00:05:55.689 ************************************ 00:05:55.689 06:45:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:55.689 EAL: Detected CPU lcores: 10 00:05:55.689 EAL: Detected NUMA nodes: 1 00:05:55.689 EAL: Detected shared linkage of DPDK 00:05:55.689 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:55.689 EAL: Selected IOVA mode 'PA' 00:05:55.947 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:55.948 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:55.948 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:55.948 Starting DPDK initialization... 00:05:55.948 Starting SPDK post initialization... 00:05:55.948 SPDK NVMe probe 00:05:55.948 Attaching to 0000:00:06.0 00:05:55.948 Attaching to 0000:00:07.0 00:05:55.948 Attached to 0000:00:06.0 00:05:55.948 Attached to 0000:00:07.0 00:05:55.948 Cleaning up... 00:05:55.948 ************************************ 00:05:55.948 END TEST env_dpdk_post_init 00:05:55.948 ************************************ 00:05:55.948 00:05:55.948 real 0m0.182s 00:05:55.948 user 0m0.053s 00:05:55.948 sys 0m0.028s 00:05:55.948 06:45:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.948 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.948 06:45:00 -- env/env.sh@26 -- # uname 00:05:55.948 06:45:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:55.948 06:45:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.948 06:45:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.948 06:45:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.948 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.948 ************************************ 00:05:55.948 START TEST env_mem_callbacks 00:05:55.948 ************************************ 00:05:55.948 06:45:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.948 EAL: Detected CPU lcores: 10 00:05:55.948 EAL: Detected NUMA nodes: 1 00:05:55.948 EAL: Detected shared linkage of DPDK 00:05:55.948 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:55.948 EAL: Selected IOVA mode 'PA' 00:05:55.948 00:05:55.948 00:05:55.948 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.948 http://cunit.sourceforge.net/ 00:05:55.948 00:05:55.948 00:05:55.948 Suite: memory 00:05:55.948 Test: test ... 
00:05:55.948 register 0x200000200000 2097152 00:05:55.948 malloc 3145728 00:05:55.948 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:55.948 register 0x200000400000 4194304 00:05:55.948 buf 0x200000500000 len 3145728 PASSED 00:05:55.948 malloc 64 00:05:55.948 buf 0x2000004fff40 len 64 PASSED 00:05:55.948 malloc 4194304 00:05:55.948 register 0x200000800000 6291456 00:05:55.948 buf 0x200000a00000 len 4194304 PASSED 00:05:55.948 free 0x200000500000 3145728 00:05:55.948 free 0x2000004fff40 64 00:05:55.948 unregister 0x200000400000 4194304 PASSED 00:05:55.948 free 0x200000a00000 4194304 00:05:55.948 unregister 0x200000800000 6291456 PASSED 00:05:55.948 malloc 8388608 00:05:55.948 register 0x200000400000 10485760 00:05:55.948 buf 0x200000600000 len 8388608 PASSED 00:05:55.948 free 0x200000600000 8388608 00:05:55.948 unregister 0x200000400000 10485760 PASSED 00:05:55.948 passed 00:05:55.948 00:05:55.948 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.948 suites 1 1 n/a 0 0 00:05:55.948 tests 1 1 1 0 0 00:05:55.948 asserts 15 15 15 0 n/a 00:05:55.948 00:05:55.948 Elapsed time = 0.008 seconds 00:05:55.948 00:05:55.948 real 0m0.147s 00:05:55.948 user 0m0.019s 00:05:55.948 sys 0m0.025s 00:05:55.948 06:45:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.948 ************************************ 00:05:55.948 END TEST env_mem_callbacks 00:05:55.948 ************************************ 00:05:55.948 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 ************************************ 00:05:56.207 END TEST env 00:05:56.207 ************************************ 00:05:56.207 00:05:56.207 real 0m2.006s 00:05:56.207 user 0m0.990s 00:05:56.207 sys 0m0.656s 00:05:56.207 06:45:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.207 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 06:45:00 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.207 06:45:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.207 06:45:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.207 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 ************************************ 00:05:56.207 START TEST rpc 00:05:56.207 ************************************ 00:05:56.207 06:45:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.207 * Looking for test storage... 
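Every suite in this log, from dm_mount down to the rpc suite starting here, is driven through the same run_test wrapper, which accounts for the START/END banners and the real/user/sys totals printed after each test. A stripped-down sketch of that pattern (the banner format is copied from the log; the body is illustrative, not SPDK's actual helper, which also handles xtrace and argument checks):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # produces the real/user/sys lines seen after each test
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}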
00:05:56.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.207 06:45:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.207 06:45:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.207 06:45:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.207 06:45:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.207 06:45:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.207 06:45:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.207 06:45:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.207 06:45:00 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.207 06:45:00 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.207 06:45:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.207 06:45:00 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.207 06:45:00 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.207 06:45:00 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.207 06:45:00 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.207 06:45:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.207 06:45:00 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.207 06:45:00 -- scripts/common.sh@344 -- # : 1 00:05:56.207 06:45:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.207 06:45:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.207 06:45:00 -- scripts/common.sh@364 -- # decimal 1 00:05:56.207 06:45:00 -- scripts/common.sh@352 -- # local d=1 00:05:56.207 06:45:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.207 06:45:00 -- scripts/common.sh@354 -- # echo 1 00:05:56.207 06:45:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.207 06:45:00 -- scripts/common.sh@365 -- # decimal 2 00:05:56.207 06:45:00 -- scripts/common.sh@352 -- # local d=2 00:05:56.466 06:45:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.466 06:45:00 -- scripts/common.sh@354 -- # echo 2 00:05:56.466 06:45:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.466 06:45:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.466 06:45:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.466 06:45:00 -- scripts/common.sh@367 -- # return 0 00:05:56.466 06:45:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.466 06:45:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.466 --rc genhtml_branch_coverage=1 00:05:56.466 --rc genhtml_function_coverage=1 00:05:56.466 --rc genhtml_legend=1 00:05:56.466 --rc geninfo_all_blocks=1 00:05:56.466 --rc geninfo_unexecuted_blocks=1 00:05:56.466 00:05:56.466 ' 00:05:56.466 06:45:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.466 --rc genhtml_branch_coverage=1 00:05:56.466 --rc genhtml_function_coverage=1 00:05:56.466 --rc genhtml_legend=1 00:05:56.466 --rc geninfo_all_blocks=1 00:05:56.466 --rc geninfo_unexecuted_blocks=1 00:05:56.466 00:05:56.466 ' 00:05:56.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
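The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line above comes from waitforlisten, which blocks until the freshly launched spdk_tgt answers on its RPC socket (the pid, 65868, and the xtrace of the call appear just below). Conceptually it is a poll loop like the following sketch; the socket path and rpc.py are real SPDK pieces, but the loop body and retry budget here are illustrative:

# Poll until the target responds on its RPC socket, bailing out if it dies.
pid=$1
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done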
00:05:56.466 06:45:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.466 --rc genhtml_branch_coverage=1 00:05:56.466 --rc genhtml_function_coverage=1 00:05:56.466 --rc genhtml_legend=1 00:05:56.466 --rc geninfo_all_blocks=1 00:05:56.466 --rc geninfo_unexecuted_blocks=1 00:05:56.466 00:05:56.466 ' 00:05:56.466 06:45:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.466 --rc genhtml_branch_coverage=1 00:05:56.466 --rc genhtml_function_coverage=1 00:05:56.466 --rc genhtml_legend=1 00:05:56.466 --rc geninfo_all_blocks=1 00:05:56.466 --rc geninfo_unexecuted_blocks=1 00:05:56.466 00:05:56.466 ' 00:05:56.466 06:45:00 -- rpc/rpc.sh@65 -- # spdk_pid=65868 00:05:56.466 06:45:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.466 06:45:00 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:56.466 06:45:00 -- rpc/rpc.sh@67 -- # waitforlisten 65868 00:05:56.466 06:45:00 -- common/autotest_common.sh@829 -- # '[' -z 65868 ']' 00:05:56.466 06:45:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.466 06:45:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.466 06:45:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.466 06:45:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.466 06:45:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.466 [2024-12-13 06:45:00.798409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.466 [2024-12-13 06:45:00.798779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65868 ] 00:05:56.466 [2024-12-13 06:45:00.936189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.466 [2024-12-13 06:45:00.974635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.466 [2024-12-13 06:45:00.975017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:56.466 [2024-12-13 06:45:00.975150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65868' to capture a snapshot of events at runtime. 00:05:56.466 [2024-12-13 06:45:00.975341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65868 for offline analysis/debug. 
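Because spdk_tgt was started with '-e bdev', only the bdev tracepoint group is enabled; that matches the tpoint_group_mask of 0x8 that trace_get_info reports further down, and the trace buffer lives in the shared-memory file named after the pid. Capturing those events follows the hints the app prints at startup (the commands are the ones suggested in the log above; exact invocation details are a sketch):

# Live capture from the running target, as the startup notice suggests:
spdk_trace -s spdk_tgt -p 65868
# Or inspect the shared-memory trace file offline after the run:
spdk_trace -f /dev/shm/spdk_tgt_trace.pid65868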
00:05:56.466 [2024-12-13 06:45:00.975617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.403 06:45:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.403 06:45:01 -- common/autotest_common.sh@862 -- # return 0 00:05:57.403 06:45:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.403 06:45:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.403 06:45:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:57.403 06:45:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:57.403 06:45:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.403 06:45:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.403 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.403 ************************************ 00:05:57.403 START TEST rpc_integrity 00:05:57.403 ************************************ 00:05:57.403 06:45:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:57.403 06:45:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.403 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.403 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.403 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.403 06:45:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.403 06:45:01 -- rpc/rpc.sh@13 -- # jq length 00:05:57.403 06:45:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.403 06:45:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.403 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.403 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.403 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.403 06:45:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:57.403 06:45:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.403 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.403 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.662 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.662 06:45:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.662 { 00:05:57.662 "name": "Malloc0", 00:05:57.662 "aliases": [ 00:05:57.662 "9200f98f-a1d1-441d-b6f4-410b7f51a049" 00:05:57.662 ], 00:05:57.662 "product_name": "Malloc disk", 00:05:57.662 "block_size": 512, 00:05:57.662 "num_blocks": 16384, 00:05:57.662 "uuid": "9200f98f-a1d1-441d-b6f4-410b7f51a049", 00:05:57.662 "assigned_rate_limits": { 00:05:57.662 "rw_ios_per_sec": 0, 00:05:57.662 "rw_mbytes_per_sec": 0, 00:05:57.662 "r_mbytes_per_sec": 0, 00:05:57.662 "w_mbytes_per_sec": 0 00:05:57.662 }, 00:05:57.662 "claimed": false, 00:05:57.662 "zoned": false, 00:05:57.662 "supported_io_types": { 00:05:57.662 "read": true, 00:05:57.662 "write": true, 00:05:57.662 "unmap": true, 00:05:57.662 "write_zeroes": true, 00:05:57.662 "flush": true, 00:05:57.662 "reset": true, 00:05:57.662 "compare": false, 00:05:57.662 "compare_and_write": false, 00:05:57.662 "abort": true, 00:05:57.662 "nvme_admin": false, 00:05:57.662 "nvme_io": false 00:05:57.662 }, 00:05:57.662 "memory_domains": [ 00:05:57.662 { 00:05:57.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.663 
"dma_device_type": 2 00:05:57.663 } 00:05:57.663 ], 00:05:57.663 "driver_specific": {} 00:05:57.663 } 00:05:57.663 ]' 00:05:57.663 06:45:01 -- rpc/rpc.sh@17 -- # jq length 00:05:57.663 06:45:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.663 06:45:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:57.663 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.663 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.663 [2024-12-13 06:45:01.985636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:57.663 [2024-12-13 06:45:01.985699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.663 [2024-12-13 06:45:01.985719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd04030 00:05:57.663 [2024-12-13 06:45:01.985743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.663 [2024-12-13 06:45:01.987426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.663 [2024-12-13 06:45:01.987475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.663 Passthru0 00:05:57.663 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.663 06:45:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.663 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.663 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.663 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.663 06:45:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.663 { 00:05:57.663 "name": "Malloc0", 00:05:57.663 "aliases": [ 00:05:57.663 "9200f98f-a1d1-441d-b6f4-410b7f51a049" 00:05:57.663 ], 00:05:57.663 "product_name": "Malloc disk", 00:05:57.663 "block_size": 512, 00:05:57.663 "num_blocks": 16384, 00:05:57.663 "uuid": "9200f98f-a1d1-441d-b6f4-410b7f51a049", 00:05:57.663 "assigned_rate_limits": { 00:05:57.663 "rw_ios_per_sec": 0, 00:05:57.663 "rw_mbytes_per_sec": 0, 00:05:57.663 "r_mbytes_per_sec": 0, 00:05:57.663 "w_mbytes_per_sec": 0 00:05:57.663 }, 00:05:57.663 "claimed": true, 00:05:57.663 "claim_type": "exclusive_write", 00:05:57.663 "zoned": false, 00:05:57.663 "supported_io_types": { 00:05:57.663 "read": true, 00:05:57.663 "write": true, 00:05:57.663 "unmap": true, 00:05:57.663 "write_zeroes": true, 00:05:57.663 "flush": true, 00:05:57.663 "reset": true, 00:05:57.663 "compare": false, 00:05:57.663 "compare_and_write": false, 00:05:57.663 "abort": true, 00:05:57.663 "nvme_admin": false, 00:05:57.663 "nvme_io": false 00:05:57.663 }, 00:05:57.663 "memory_domains": [ 00:05:57.663 { 00:05:57.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.663 "dma_device_type": 2 00:05:57.663 } 00:05:57.663 ], 00:05:57.663 "driver_specific": {} 00:05:57.663 }, 00:05:57.663 { 00:05:57.663 "name": "Passthru0", 00:05:57.663 "aliases": [ 00:05:57.663 "786bf141-9db3-5d04-9514-137ad3b9d6ae" 00:05:57.663 ], 00:05:57.663 "product_name": "passthru", 00:05:57.663 "block_size": 512, 00:05:57.663 "num_blocks": 16384, 00:05:57.663 "uuid": "786bf141-9db3-5d04-9514-137ad3b9d6ae", 00:05:57.663 "assigned_rate_limits": { 00:05:57.663 "rw_ios_per_sec": 0, 00:05:57.663 "rw_mbytes_per_sec": 0, 00:05:57.663 "r_mbytes_per_sec": 0, 00:05:57.663 "w_mbytes_per_sec": 0 00:05:57.663 }, 00:05:57.663 "claimed": false, 00:05:57.663 "zoned": false, 00:05:57.663 "supported_io_types": { 00:05:57.663 "read": true, 00:05:57.663 "write": true, 00:05:57.663 "unmap": true, 00:05:57.663 
"write_zeroes": true, 00:05:57.663 "flush": true, 00:05:57.663 "reset": true, 00:05:57.663 "compare": false, 00:05:57.663 "compare_and_write": false, 00:05:57.663 "abort": true, 00:05:57.663 "nvme_admin": false, 00:05:57.663 "nvme_io": false 00:05:57.663 }, 00:05:57.663 "memory_domains": [ 00:05:57.663 { 00:05:57.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.663 "dma_device_type": 2 00:05:57.663 } 00:05:57.663 ], 00:05:57.663 "driver_specific": { 00:05:57.663 "passthru": { 00:05:57.663 "name": "Passthru0", 00:05:57.663 "base_bdev_name": "Malloc0" 00:05:57.663 } 00:05:57.663 } 00:05:57.663 } 00:05:57.663 ]' 00:05:57.663 06:45:02 -- rpc/rpc.sh@21 -- # jq length 00:05:57.663 06:45:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.663 06:45:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.663 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.663 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.663 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.663 06:45:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:57.663 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.663 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.663 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.663 06:45:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.663 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.663 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.663 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.663 06:45:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.663 06:45:02 -- rpc/rpc.sh@26 -- # jq length 00:05:57.663 ************************************ 00:05:57.663 END TEST rpc_integrity 00:05:57.663 ************************************ 00:05:57.663 06:45:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.663 00:05:57.663 real 0m0.324s 00:05:57.663 user 0m0.220s 00:05:57.663 sys 0m0.034s 00:05:57.663 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.663 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:57.922 06:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.922 06:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 ************************************ 00:05:57.922 START TEST rpc_plugins 00:05:57.922 ************************************ 00:05:57.922 06:45:02 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:57.922 06:45:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:57.922 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.922 06:45:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:57.922 06:45:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:57.922 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.922 06:45:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:57.922 { 00:05:57.922 "name": "Malloc1", 00:05:57.922 "aliases": [ 00:05:57.922 "6465b6e1-d966-4bdc-a059-90da0a8ccecb" 00:05:57.922 ], 00:05:57.922 "product_name": "Malloc disk", 00:05:57.922 
"block_size": 4096, 00:05:57.922 "num_blocks": 256, 00:05:57.922 "uuid": "6465b6e1-d966-4bdc-a059-90da0a8ccecb", 00:05:57.922 "assigned_rate_limits": { 00:05:57.922 "rw_ios_per_sec": 0, 00:05:57.922 "rw_mbytes_per_sec": 0, 00:05:57.922 "r_mbytes_per_sec": 0, 00:05:57.922 "w_mbytes_per_sec": 0 00:05:57.922 }, 00:05:57.922 "claimed": false, 00:05:57.922 "zoned": false, 00:05:57.922 "supported_io_types": { 00:05:57.922 "read": true, 00:05:57.922 "write": true, 00:05:57.922 "unmap": true, 00:05:57.922 "write_zeroes": true, 00:05:57.922 "flush": true, 00:05:57.922 "reset": true, 00:05:57.922 "compare": false, 00:05:57.922 "compare_and_write": false, 00:05:57.922 "abort": true, 00:05:57.922 "nvme_admin": false, 00:05:57.922 "nvme_io": false 00:05:57.922 }, 00:05:57.922 "memory_domains": [ 00:05:57.922 { 00:05:57.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.922 "dma_device_type": 2 00:05:57.922 } 00:05:57.922 ], 00:05:57.922 "driver_specific": {} 00:05:57.922 } 00:05:57.922 ]' 00:05:57.922 06:45:02 -- rpc/rpc.sh@32 -- # jq length 00:05:57.922 06:45:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:57.922 06:45:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:57.922 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.922 06:45:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:57.922 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.922 06:45:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:57.922 06:45:02 -- rpc/rpc.sh@36 -- # jq length 00:05:57.922 ************************************ 00:05:57.922 END TEST rpc_plugins 00:05:57.922 ************************************ 00:05:57.922 06:45:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:57.922 00:05:57.922 real 0m0.168s 00:05:57.922 user 0m0.117s 00:05:57.922 sys 0m0.018s 00:05:57.922 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 06:45:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:57.922 06:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.922 06:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.922 ************************************ 00:05:57.922 START TEST rpc_trace_cmd_test 00:05:57.922 ************************************ 00:05:57.922 06:45:02 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:57.922 06:45:02 -- rpc/rpc.sh@40 -- # local info 00:05:57.922 06:45:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:57.922 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.922 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.182 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.182 06:45:02 -- rpc/rpc.sh@42 -- # info='{ 00:05:58.182 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65868", 00:05:58.182 "tpoint_group_mask": "0x8", 00:05:58.182 "iscsi_conn": { 00:05:58.182 "mask": "0x2", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "scsi": { 00:05:58.182 "mask": "0x4", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "bdev": { 00:05:58.182 "mask": "0x8", 00:05:58.182 "tpoint_mask": 
"0xffffffffffffffff" 00:05:58.182 }, 00:05:58.182 "nvmf_rdma": { 00:05:58.182 "mask": "0x10", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "nvmf_tcp": { 00:05:58.182 "mask": "0x20", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "ftl": { 00:05:58.182 "mask": "0x40", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "blobfs": { 00:05:58.182 "mask": "0x80", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "dsa": { 00:05:58.182 "mask": "0x200", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "thread": { 00:05:58.182 "mask": "0x400", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "nvme_pcie": { 00:05:58.182 "mask": "0x800", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "iaa": { 00:05:58.182 "mask": "0x1000", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "nvme_tcp": { 00:05:58.182 "mask": "0x2000", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 }, 00:05:58.182 "bdev_nvme": { 00:05:58.182 "mask": "0x4000", 00:05:58.182 "tpoint_mask": "0x0" 00:05:58.182 } 00:05:58.182 }' 00:05:58.182 06:45:02 -- rpc/rpc.sh@43 -- # jq length 00:05:58.182 06:45:02 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:58.182 06:45:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.182 06:45:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.182 06:45:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:58.182 06:45:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.182 06:45:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:58.182 06:45:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:58.182 06:45:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:58.441 ************************************ 00:05:58.441 END TEST rpc_trace_cmd_test 00:05:58.441 ************************************ 00:05:58.441 06:45:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:58.441 00:05:58.441 real 0m0.291s 00:05:58.441 user 0m0.253s 00:05:58.441 sys 0m0.027s 00:05:58.441 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.441 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.441 06:45:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:58.441 06:45:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:58.441 06:45:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:58.441 06:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.441 06:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.441 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.441 ************************************ 00:05:58.441 START TEST rpc_daemon_integrity 00:05:58.441 ************************************ 00:05:58.441 06:45:02 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:58.441 06:45:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.441 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.441 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.441 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.441 06:45:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.441 06:45:02 -- rpc/rpc.sh@13 -- # jq length 00:05:58.441 06:45:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.441 06:45:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.441 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.441 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.441 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.441 06:45:02 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:58.441 06:45:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.441 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.441 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.441 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.441 06:45:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.441 { 00:05:58.441 "name": "Malloc2", 00:05:58.441 "aliases": [ 00:05:58.441 "709d1a89-a128-4b06-bc8f-e1cf65429483" 00:05:58.441 ], 00:05:58.441 "product_name": "Malloc disk", 00:05:58.441 "block_size": 512, 00:05:58.441 "num_blocks": 16384, 00:05:58.441 "uuid": "709d1a89-a128-4b06-bc8f-e1cf65429483", 00:05:58.441 "assigned_rate_limits": { 00:05:58.441 "rw_ios_per_sec": 0, 00:05:58.441 "rw_mbytes_per_sec": 0, 00:05:58.441 "r_mbytes_per_sec": 0, 00:05:58.441 "w_mbytes_per_sec": 0 00:05:58.441 }, 00:05:58.441 "claimed": false, 00:05:58.441 "zoned": false, 00:05:58.441 "supported_io_types": { 00:05:58.442 "read": true, 00:05:58.442 "write": true, 00:05:58.442 "unmap": true, 00:05:58.442 "write_zeroes": true, 00:05:58.442 "flush": true, 00:05:58.442 "reset": true, 00:05:58.442 "compare": false, 00:05:58.442 "compare_and_write": false, 00:05:58.442 "abort": true, 00:05:58.442 "nvme_admin": false, 00:05:58.442 "nvme_io": false 00:05:58.442 }, 00:05:58.442 "memory_domains": [ 00:05:58.442 { 00:05:58.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.442 "dma_device_type": 2 00:05:58.442 } 00:05:58.442 ], 00:05:58.442 "driver_specific": {} 00:05:58.442 } 00:05:58.442 ]' 00:05:58.442 06:45:02 -- rpc/rpc.sh@17 -- # jq length 00:05:58.442 06:45:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.442 06:45:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:58.442 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.442 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.442 [2024-12-13 06:45:02.942109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:58.442 [2024-12-13 06:45:02.942181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.442 [2024-12-13 06:45:02.942202] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd049d0 00:05:58.442 [2024-12-13 06:45:02.942211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.442 [2024-12-13 06:45:02.943793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.442 [2024-12-13 06:45:02.943825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.442 Passthru0 00:05:58.442 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.442 06:45:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.442 06:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.442 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.701 06:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.701 06:45:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.701 { 00:05:58.701 "name": "Malloc2", 00:05:58.701 "aliases": [ 00:05:58.701 "709d1a89-a128-4b06-bc8f-e1cf65429483" 00:05:58.701 ], 00:05:58.701 "product_name": "Malloc disk", 00:05:58.701 "block_size": 512, 00:05:58.701 "num_blocks": 16384, 00:05:58.701 "uuid": "709d1a89-a128-4b06-bc8f-e1cf65429483", 00:05:58.701 "assigned_rate_limits": { 00:05:58.701 "rw_ios_per_sec": 0, 00:05:58.701 "rw_mbytes_per_sec": 0, 00:05:58.701 "r_mbytes_per_sec": 0, 00:05:58.701 
"w_mbytes_per_sec": 0 00:05:58.701 }, 00:05:58.701 "claimed": true, 00:05:58.701 "claim_type": "exclusive_write", 00:05:58.701 "zoned": false, 00:05:58.701 "supported_io_types": { 00:05:58.701 "read": true, 00:05:58.701 "write": true, 00:05:58.701 "unmap": true, 00:05:58.701 "write_zeroes": true, 00:05:58.701 "flush": true, 00:05:58.701 "reset": true, 00:05:58.701 "compare": false, 00:05:58.701 "compare_and_write": false, 00:05:58.701 "abort": true, 00:05:58.701 "nvme_admin": false, 00:05:58.701 "nvme_io": false 00:05:58.701 }, 00:05:58.701 "memory_domains": [ 00:05:58.701 { 00:05:58.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.701 "dma_device_type": 2 00:05:58.701 } 00:05:58.701 ], 00:05:58.701 "driver_specific": {} 00:05:58.701 }, 00:05:58.701 { 00:05:58.701 "name": "Passthru0", 00:05:58.701 "aliases": [ 00:05:58.701 "64b510df-3544-5ded-aee7-24c5cb5e08a7" 00:05:58.701 ], 00:05:58.701 "product_name": "passthru", 00:05:58.701 "block_size": 512, 00:05:58.701 "num_blocks": 16384, 00:05:58.701 "uuid": "64b510df-3544-5ded-aee7-24c5cb5e08a7", 00:05:58.701 "assigned_rate_limits": { 00:05:58.701 "rw_ios_per_sec": 0, 00:05:58.701 "rw_mbytes_per_sec": 0, 00:05:58.701 "r_mbytes_per_sec": 0, 00:05:58.701 "w_mbytes_per_sec": 0 00:05:58.701 }, 00:05:58.701 "claimed": false, 00:05:58.701 "zoned": false, 00:05:58.701 "supported_io_types": { 00:05:58.701 "read": true, 00:05:58.701 "write": true, 00:05:58.701 "unmap": true, 00:05:58.701 "write_zeroes": true, 00:05:58.701 "flush": true, 00:05:58.701 "reset": true, 00:05:58.701 "compare": false, 00:05:58.701 "compare_and_write": false, 00:05:58.701 "abort": true, 00:05:58.701 "nvme_admin": false, 00:05:58.701 "nvme_io": false 00:05:58.701 }, 00:05:58.701 "memory_domains": [ 00:05:58.701 { 00:05:58.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.701 "dma_device_type": 2 00:05:58.701 } 00:05:58.701 ], 00:05:58.701 "driver_specific": { 00:05:58.701 "passthru": { 00:05:58.701 "name": "Passthru0", 00:05:58.701 "base_bdev_name": "Malloc2" 00:05:58.701 } 00:05:58.701 } 00:05:58.701 } 00:05:58.701 ]' 00:05:58.701 06:45:02 -- rpc/rpc.sh@21 -- # jq length 00:05:58.701 06:45:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.701 06:45:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.701 06:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.701 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.701 06:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.701 06:45:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.701 06:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.701 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.701 06:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.701 06:45:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.701 06:45:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.701 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.701 06:45:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.701 06:45:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.701 06:45:03 -- rpc/rpc.sh@26 -- # jq length 00:05:58.701 ************************************ 00:05:58.701 END TEST rpc_daemon_integrity 00:05:58.701 ************************************ 00:05:58.701 06:45:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.701 00:05:58.701 real 0m0.328s 00:05:58.701 user 0m0.221s 00:05:58.701 sys 0m0.040s 00:05:58.701 06:45:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.701 
06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.701 06:45:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.701 06:45:03 -- rpc/rpc.sh@84 -- # killprocess 65868 00:05:58.701 06:45:03 -- common/autotest_common.sh@936 -- # '[' -z 65868 ']' 00:05:58.701 06:45:03 -- common/autotest_common.sh@940 -- # kill -0 65868 00:05:58.701 06:45:03 -- common/autotest_common.sh@941 -- # uname 00:05:58.701 06:45:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.701 06:45:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65868 00:05:58.701 killing process with pid 65868 00:05:58.701 06:45:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.701 06:45:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.701 06:45:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65868' 00:05:58.701 06:45:03 -- common/autotest_common.sh@955 -- # kill 65868 00:05:58.701 06:45:03 -- common/autotest_common.sh@960 -- # wait 65868 00:05:58.961 00:05:58.961 real 0m2.904s 00:05:58.961 user 0m3.893s 00:05:58.961 sys 0m0.617s 00:05:58.961 06:45:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.961 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:58.961 ************************************ 00:05:58.961 END TEST rpc 00:05:58.961 ************************************ 00:05:59.220 06:45:03 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:59.220 06:45:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.220 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.220 ************************************ 00:05:59.220 START TEST rpc_client 00:05:59.220 ************************************ 00:05:59.220 06:45:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:59.220 * Looking for test storage... 00:05:59.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:59.220 06:45:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.220 06:45:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.220 06:45:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.220 06:45:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.220 06:45:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.220 06:45:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.220 06:45:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.220 06:45:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.220 06:45:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.220 06:45:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.220 06:45:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.220 06:45:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.220 06:45:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.220 06:45:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.220 06:45:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.220 06:45:03 -- scripts/common.sh@344 -- # : 1 00:05:59.220 06:45:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.220 06:45:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.220 06:45:03 -- scripts/common.sh@364 -- # decimal 1 00:05:59.220 06:45:03 -- scripts/common.sh@352 -- # local d=1 00:05:59.220 06:45:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.220 06:45:03 -- scripts/common.sh@354 -- # echo 1 00:05:59.220 06:45:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.220 06:45:03 -- scripts/common.sh@365 -- # decimal 2 00:05:59.220 06:45:03 -- scripts/common.sh@352 -- # local d=2 00:05:59.220 06:45:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.220 06:45:03 -- scripts/common.sh@354 -- # echo 2 00:05:59.220 06:45:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.220 06:45:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.220 06:45:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.220 06:45:03 -- scripts/common.sh@367 -- # return 0 00:05:59.220 06:45:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.220 --rc genhtml_branch_coverage=1 00:05:59.220 --rc genhtml_function_coverage=1 00:05:59.220 --rc genhtml_legend=1 00:05:59.220 --rc geninfo_all_blocks=1 00:05:59.220 --rc geninfo_unexecuted_blocks=1 00:05:59.220 00:05:59.220 ' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.220 --rc genhtml_branch_coverage=1 00:05:59.220 --rc genhtml_function_coverage=1 00:05:59.220 --rc genhtml_legend=1 00:05:59.220 --rc geninfo_all_blocks=1 00:05:59.220 --rc geninfo_unexecuted_blocks=1 00:05:59.220 00:05:59.220 ' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.220 --rc genhtml_branch_coverage=1 00:05:59.220 --rc genhtml_function_coverage=1 00:05:59.220 --rc genhtml_legend=1 00:05:59.220 --rc geninfo_all_blocks=1 00:05:59.220 --rc geninfo_unexecuted_blocks=1 00:05:59.220 00:05:59.220 ' 00:05:59.220 06:45:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.220 --rc genhtml_branch_coverage=1 00:05:59.220 --rc genhtml_function_coverage=1 00:05:59.220 --rc genhtml_legend=1 00:05:59.220 --rc geninfo_all_blocks=1 00:05:59.220 --rc geninfo_unexecuted_blocks=1 00:05:59.220 00:05:59.220 ' 00:05:59.220 06:45:03 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:59.220 OK 00:05:59.220 06:45:03 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:59.220 00:05:59.220 real 0m0.211s 00:05:59.220 user 0m0.124s 00:05:59.220 sys 0m0.093s 00:05:59.220 06:45:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.220 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.220 ************************************ 00:05:59.220 END TEST rpc_client 00:05:59.220 ************************************ 00:05:59.480 06:45:03 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:59.480 06:45:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.480 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.480 ************************************ 00:05:59.480 START TEST 
json_config 00:05:59.480 ************************************ 00:05:59.480 06:45:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:59.480 06:45:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.480 06:45:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.480 06:45:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.480 06:45:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.480 06:45:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.480 06:45:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.480 06:45:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.480 06:45:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.480 06:45:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.480 06:45:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.480 06:45:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.480 06:45:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.480 06:45:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.480 06:45:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.480 06:45:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.480 06:45:03 -- scripts/common.sh@344 -- # : 1 00:05:59.480 06:45:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.480 06:45:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.480 06:45:03 -- scripts/common.sh@364 -- # decimal 1 00:05:59.480 06:45:03 -- scripts/common.sh@352 -- # local d=1 00:05:59.480 06:45:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.480 06:45:03 -- scripts/common.sh@354 -- # echo 1 00:05:59.480 06:45:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.480 06:45:03 -- scripts/common.sh@365 -- # decimal 2 00:05:59.480 06:45:03 -- scripts/common.sh@352 -- # local d=2 00:05:59.480 06:45:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.480 06:45:03 -- scripts/common.sh@354 -- # echo 2 00:05:59.480 06:45:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.480 06:45:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.480 06:45:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.480 06:45:03 -- scripts/common.sh@367 -- # return 0 00:05:59.480 06:45:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.480 --rc genhtml_branch_coverage=1 00:05:59.480 --rc genhtml_function_coverage=1 00:05:59.480 --rc genhtml_legend=1 00:05:59.480 --rc geninfo_all_blocks=1 00:05:59.480 --rc geninfo_unexecuted_blocks=1 00:05:59.480 00:05:59.480 ' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.480 --rc genhtml_branch_coverage=1 00:05:59.480 --rc genhtml_function_coverage=1 00:05:59.480 --rc genhtml_legend=1 00:05:59.480 --rc geninfo_all_blocks=1 00:05:59.480 --rc geninfo_unexecuted_blocks=1 00:05:59.480 00:05:59.480 ' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.480 --rc genhtml_branch_coverage=1 00:05:59.480 --rc genhtml_function_coverage=1 00:05:59.480 --rc genhtml_legend=1 00:05:59.480 --rc 
geninfo_all_blocks=1 00:05:59.480 --rc geninfo_unexecuted_blocks=1 00:05:59.480 00:05:59.480 ' 00:05:59.480 06:45:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.480 --rc genhtml_branch_coverage=1 00:05:59.480 --rc genhtml_function_coverage=1 00:05:59.480 --rc genhtml_legend=1 00:05:59.480 --rc geninfo_all_blocks=1 00:05:59.480 --rc geninfo_unexecuted_blocks=1 00:05:59.480 00:05:59.480 ' 00:05:59.480 06:45:03 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.480 06:45:03 -- nvmf/common.sh@7 -- # uname -s 00:05:59.480 06:45:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.480 06:45:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.480 06:45:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.480 06:45:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.480 06:45:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.480 06:45:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.480 06:45:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.480 06:45:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.480 06:45:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.480 06:45:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.480 06:45:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:05:59.480 06:45:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:05:59.480 06:45:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.480 06:45:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.480 06:45:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.480 06:45:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.480 06:45:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.480 06:45:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.480 06:45:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.480 06:45:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.481 06:45:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.481 06:45:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.481 
06:45:03 -- paths/export.sh@5 -- # export PATH 00:05:59.481 06:45:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.481 06:45:03 -- nvmf/common.sh@46 -- # : 0 00:05:59.481 06:45:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:59.481 06:45:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:59.481 06:45:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:59.481 06:45:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.481 06:45:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.481 06:45:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:59.481 06:45:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:59.481 06:45:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:59.481 06:45:03 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:59.481 06:45:03 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:59.481 06:45:03 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:59.481 06:45:03 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:59.481 06:45:03 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:59.481 06:45:03 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:59.481 06:45:03 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:59.481 06:45:03 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:59.481 06:45:03 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:59.481 06:45:03 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:59.481 06:45:03 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.481 INFO: JSON configuration test init 00:05:59.481 06:45:03 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:59.481 06:45:03 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:59.481 06:45:03 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:59.481 06:45:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.481 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.481 06:45:03 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:59.481 06:45:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.481 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.481 Waiting for target to run... 00:05:59.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
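For orientation before the launch trace below: json_config_test_start_app boils down to starting spdk_tgt against a private RPC socket and polling until that socket answers. A minimal sketch using the binary, flags, and socket path shown in this log (the polling loop is illustrative rather than the harness's exact waitforlisten implementation, and rpc_get_methods is used here only as a liveness probe):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # Poll until the UNIX-domain RPC socket accepts requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

With --wait-for-rpc the target pauses before subsystem initialization, which is why the test can build its entire configuration over RPC before anything starts serving I/O.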
00:05:59.481 06:45:03 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:59.481 06:45:03 -- json_config/json_config.sh@98 -- # local app=target 00:05:59.481 06:45:03 -- json_config/json_config.sh@99 -- # shift 00:05:59.481 06:45:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:59.481 06:45:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:59.481 06:45:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=66121 00:05:59.481 06:45:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:59.481 06:45:03 -- json_config/json_config.sh@114 -- # waitforlisten 66121 /var/tmp/spdk_tgt.sock 00:05:59.481 06:45:03 -- common/autotest_common.sh@829 -- # '[' -z 66121 ']' 00:05:59.481 06:45:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.481 06:45:03 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:59.481 06:45:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.481 06:45:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.481 06:45:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.481 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.740 [2024-12-13 06:45:04.019894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.740 [2024-12-13 06:45:04.020180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66121 ] 00:05:59.998 [2024-12-13 06:45:04.319257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.998 [2024-12-13 06:45:04.339269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.998 [2024-12-13 06:45:04.339760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.577 06:45:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.578 06:45:05 -- common/autotest_common.sh@862 -- # return 0 00:06:00.578 06:45:05 -- json_config/json_config.sh@115 -- # echo '' 00:06:00.578 00:06:00.578 06:45:05 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:00.578 06:45:05 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:00.578 06:45:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.578 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.578 06:45:05 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:00.578 06:45:05 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:00.578 06:45:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.578 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.578 06:45:05 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:00.578 06:45:05 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:00.578 06:45:05 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.160 06:45:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:01.160 06:45:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:01.160 06:45:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.160 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 06:45:05 -- json_config/json_config.sh@48 -- # local ret=0 00:06:01.160 06:45:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.160 06:45:05 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:01.160 06:45:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:01.160 06:45:05 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:01.160 06:45:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.419 06:45:05 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:01.419 06:45:05 -- json_config/json_config.sh@51 -- # local get_types 00:06:01.419 06:45:05 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:01.419 06:45:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.419 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.419 06:45:05 -- json_config/json_config.sh@58 -- # return 0 00:06:01.419 06:45:05 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:01.419 06:45:05 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:01.419 06:45:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.419 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.419 06:45:05 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.419 06:45:05 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:01.419 06:45:05 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.419 06:45:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.678 MallocForNvmf0 00:06:01.678 06:45:06 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.678 06:45:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.937 MallocForNvmf1 00:06:01.937 06:45:06 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:01.937 06:45:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.197 [2024-12-13 06:45:06.647662] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
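The bring-up above, condensed: two malloc bdevs are created and a TCP transport is initialized before any subsystem exists. A sketch of the same three calls issued by hand, with every argument copied from the tgt_rpc lines in this log (only the $rpc shorthand variable is added here):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB volume, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB volume, 1024-byte blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport, 8 KiB IO unit, no in-capsule data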
00:06:02.197 06:45:06 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.197 06:45:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.456 06:45:06 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.456 06:45:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.715 06:45:07 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.715 06:45:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.974 06:45:07 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:02.974 06:45:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.232 [2024-12-13 06:45:07.676354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.233 06:45:07 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:03.233 06:45:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.233 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:06:03.233 06:45:07 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:03.233 06:45:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.233 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:06:03.491 06:45:07 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:03.491 06:45:07 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.491 06:45:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.491 MallocBdevForConfigChangeCheck 00:06:03.491 06:45:08 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:03.491 06:45:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.491 06:45:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.750 06:45:08 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:03.750 06:45:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.009 INFO: shutting down applications... 00:06:04.009 06:45:08 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
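Between the transport init above and the shutdown below, the target was populated and snapshotted. Condensed from the tgt_rpc lines just logged ($rpc as in the previous sketch; the redirect target is illustrative — the harness writes spdk_tgt_config.json through tgt_rpc save_config):

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

The extra MallocBdevForConfigChangeCheck bdev exists only so that a later delete can prove configuration drift is detectable.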
00:06:04.009 06:45:08 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:04.009 06:45:08 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:04.009 06:45:08 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:04.009 06:45:08 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.268 Calling clear_iscsi_subsystem 00:06:04.268 Calling clear_nvmf_subsystem 00:06:04.268 Calling clear_nbd_subsystem 00:06:04.268 Calling clear_ublk_subsystem 00:06:04.268 Calling clear_vhost_blk_subsystem 00:06:04.268 Calling clear_vhost_scsi_subsystem 00:06:04.268 Calling clear_scheduler_subsystem 00:06:04.268 Calling clear_bdev_subsystem 00:06:04.268 Calling clear_accel_subsystem 00:06:04.268 Calling clear_vmd_subsystem 00:06:04.268 Calling clear_sock_subsystem 00:06:04.268 Calling clear_iobuf_subsystem 00:06:04.268 06:45:08 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:04.268 06:45:08 -- json_config/json_config.sh@396 -- # count=100 00:06:04.268 06:45:08 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:04.268 06:45:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.268 06:45:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:04.268 06:45:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:04.835 06:45:09 -- json_config/json_config.sh@398 -- # break 00:06:04.835 06:45:09 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:04.835 06:45:09 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:04.835 06:45:09 -- json_config/json_config.sh@120 -- # local app=target 00:06:04.835 06:45:09 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:04.835 06:45:09 -- json_config/json_config.sh@124 -- # [[ -n 66121 ]] 00:06:04.835 06:45:09 -- json_config/json_config.sh@127 -- # kill -SIGINT 66121 00:06:04.835 06:45:09 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:04.835 06:45:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:04.835 06:45:09 -- json_config/json_config.sh@130 -- # kill -0 66121 00:06:04.835 06:45:09 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:05.403 06:45:09 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:05.403 06:45:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:05.403 06:45:09 -- json_config/json_config.sh@130 -- # kill -0 66121 00:06:05.403 06:45:09 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:05.403 06:45:09 -- json_config/json_config.sh@132 -- # break 00:06:05.403 06:45:09 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:05.403 06:45:09 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:05.403 SPDK target shutdown done 00:06:05.403 06:45:09 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:05.403 INFO: relaunching applications... 
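The shutdown just traced follows a simple SIGINT-then-poll pattern; a sketch with the pid, retry count, and sleep interval taken from the json_config.sh lines above:

  kill -SIGINT 66121
  for (( i = 0; i < 30; i++ )); do
      kill -0 66121 2>/dev/null || break   # kill -0 only tests whether the pid is still alive
      sleep 0.5
  done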
00:06:05.403 06:45:09 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:05.403 06:45:09 -- json_config/json_config.sh@98 -- # local app=target 00:06:05.403 06:45:09 -- json_config/json_config.sh@99 -- # shift 00:06:05.403 06:45:09 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:05.403 06:45:09 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:05.403 06:45:09 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:05.403 06:45:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.403 06:45:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.403 Waiting for target to run... 00:06:05.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.403 06:45:09 -- json_config/json_config.sh@111 -- # app_pid[$app]=66317 00:06:05.403 06:45:09 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:05.403 06:45:09 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:05.403 06:45:09 -- json_config/json_config.sh@114 -- # waitforlisten 66317 /var/tmp/spdk_tgt.sock 00:06:05.403 06:45:09 -- common/autotest_common.sh@829 -- # '[' -z 66317 ']' 00:06:05.403 06:45:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.403 06:45:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.403 06:45:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.403 06:45:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.403 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.403 [2024-12-13 06:45:09.765927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.403 [2024-12-13 06:45:09.766244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66317 ] 00:06:05.663 [2024-12-13 06:45:10.082147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.663 [2024-12-13 06:45:10.101497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.663 [2024-12-13 06:45:10.101933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.922 [2024-12-13 06:45:10.395608] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.922 [2024-12-13 06:45:10.427654] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.490 00:06:06.490 INFO: Checking if target configuration is the same... 00:06:06.490 06:45:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.490 06:45:10 -- common/autotest_common.sh@862 -- # return 0 00:06:06.490 06:45:10 -- json_config/json_config.sh@115 -- # echo '' 00:06:06.490 06:45:10 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:06.490 06:45:10 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
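The "same configuration" check that follows works by normalizing two JSON dumps and diffing them. A simplified sketch, assuming config_filter.py reads stdin as the json_diff.sh trace below suggests (the fixed temp names stand in for the mktemp outputs, e.g. /tmp/62.cQ5, shown in the log):

  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc save_config | $filter -method sort > /tmp/live_config.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_config.json
  diff -u /tmp/live_config.json /tmp/saved_config.json && echo 'INFO: JSON config files are the same'

Sorting first makes the comparison order-insensitive, so only a real configuration difference (such as the bdev_malloc_delete performed afterwards) flips the diff's exit code.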
00:06:06.490 06:45:10 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:06.490 06:45:10 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:06.490 06:45:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.490 + '[' 2 -ne 2 ']' 00:06:06.490 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:06.490 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:06.490 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:06.490 +++ basename /dev/fd/62 00:06:06.490 ++ mktemp /tmp/62.XXX 00:06:06.490 + tmp_file_1=/tmp/62.cQ5 00:06:06.490 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:06.490 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:06.490 + tmp_file_2=/tmp/spdk_tgt_config.json.YCd 00:06:06.490 + ret=0 00:06:06.490 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:06.748 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:06.748 + diff -u /tmp/62.cQ5 /tmp/spdk_tgt_config.json.YCd 00:06:06.748 INFO: JSON config files are the same 00:06:06.748 + echo 'INFO: JSON config files are the same' 00:06:06.748 + rm /tmp/62.cQ5 /tmp/spdk_tgt_config.json.YCd 00:06:06.748 + exit 0 00:06:06.748 INFO: changing configuration and checking if this can be detected... 00:06:06.748 06:45:11 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:06.748 06:45:11 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:06.748 06:45:11 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:06.748 06:45:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.007 06:45:11 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.007 06:45:11 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:07.007 06:45:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.007 + '[' 2 -ne 2 ']' 00:06:07.007 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:07.007 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:07.007 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:07.007 +++ basename /dev/fd/62 00:06:07.007 ++ mktemp /tmp/62.XXX 00:06:07.007 + tmp_file_1=/tmp/62.NT9 00:06:07.007 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.007 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.007 + tmp_file_2=/tmp/spdk_tgt_config.json.hDS 00:06:07.007 + ret=0 00:06:07.007 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:07.575 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:07.575 + diff -u /tmp/62.NT9 /tmp/spdk_tgt_config.json.hDS 00:06:07.575 + ret=1 00:06:07.575 + echo '=== Start of file: /tmp/62.NT9 ===' 00:06:07.575 + cat /tmp/62.NT9 00:06:07.575 + echo '=== End of file: /tmp/62.NT9 ===' 00:06:07.575 + echo '' 00:06:07.575 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hDS ===' 00:06:07.575 + cat /tmp/spdk_tgt_config.json.hDS 00:06:07.575 + echo '=== End of file: /tmp/spdk_tgt_config.json.hDS ===' 00:06:07.575 + echo '' 00:06:07.575 + rm /tmp/62.NT9 /tmp/spdk_tgt_config.json.hDS 00:06:07.575 + exit 1 00:06:07.575 INFO: configuration change detected. 00:06:07.575 06:45:11 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:07.575 06:45:11 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:07.575 06:45:11 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:07.575 06:45:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.575 06:45:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.575 06:45:11 -- json_config/json_config.sh@360 -- # local ret=0 00:06:07.575 06:45:11 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:07.575 06:45:11 -- json_config/json_config.sh@370 -- # [[ -n 66317 ]] 00:06:07.575 06:45:11 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:07.575 06:45:11 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:07.575 06:45:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.575 06:45:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.575 06:45:11 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:07.575 06:45:11 -- json_config/json_config.sh@246 -- # uname -s 00:06:07.575 06:45:11 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:07.575 06:45:11 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:07.575 06:45:11 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:07.575 06:45:11 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:07.575 06:45:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.575 06:45:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.575 06:45:11 -- json_config/json_config.sh@376 -- # killprocess 66317 00:06:07.575 06:45:11 -- common/autotest_common.sh@936 -- # '[' -z 66317 ']' 00:06:07.575 06:45:11 -- common/autotest_common.sh@940 -- # kill -0 66317 00:06:07.575 06:45:11 -- common/autotest_common.sh@941 -- # uname 00:06:07.575 06:45:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.575 06:45:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66317 00:06:07.575 killing process with pid 66317 00:06:07.575 06:45:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.575 06:45:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.575 06:45:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66317' 00:06:07.575 
06:45:11 -- common/autotest_common.sh@955 -- # kill 66317 00:06:07.575 06:45:11 -- common/autotest_common.sh@960 -- # wait 66317 00:06:07.833 06:45:12 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.833 06:45:12 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:07.833 06:45:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.833 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.833 INFO: Success 00:06:07.833 06:45:12 -- json_config/json_config.sh@381 -- # return 0 00:06:07.833 06:45:12 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:07.833 00:06:07.833 real 0m8.417s 00:06:07.833 user 0m12.326s 00:06:07.833 sys 0m1.488s 00:06:07.833 ************************************ 00:06:07.833 END TEST json_config 00:06:07.833 ************************************ 00:06:07.833 06:45:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.833 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.833 06:45:12 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.833 06:45:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.833 06:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.833 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.833 ************************************ 00:06:07.833 START TEST json_config_extra_key 00:06:07.833 ************************************ 00:06:07.833 06:45:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:07.833 06:45:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.833 06:45:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.833 06:45:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.091 06:45:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.091 06:45:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.091 06:45:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.091 06:45:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.091 06:45:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.091 06:45:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.091 06:45:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.091 06:45:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.091 06:45:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.091 06:45:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.091 06:45:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.091 06:45:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.091 06:45:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.091 06:45:12 -- scripts/common.sh@344 -- # : 1 00:06:08.091 06:45:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.091 06:45:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.091 06:45:12 -- scripts/common.sh@364 -- # decimal 1 00:06:08.091 06:45:12 -- scripts/common.sh@352 -- # local d=1 00:06:08.091 06:45:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.091 06:45:12 -- scripts/common.sh@354 -- # echo 1 00:06:08.091 06:45:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.091 06:45:12 -- scripts/common.sh@365 -- # decimal 2 00:06:08.091 06:45:12 -- scripts/common.sh@352 -- # local d=2 00:06:08.091 06:45:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.091 06:45:12 -- scripts/common.sh@354 -- # echo 2 00:06:08.091 06:45:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.091 06:45:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.091 06:45:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.091 06:45:12 -- scripts/common.sh@367 -- # return 0 00:06:08.091 06:45:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.091 06:45:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.091 --rc genhtml_branch_coverage=1 00:06:08.091 --rc genhtml_function_coverage=1 00:06:08.091 --rc genhtml_legend=1 00:06:08.091 --rc geninfo_all_blocks=1 00:06:08.091 --rc geninfo_unexecuted_blocks=1 00:06:08.091 00:06:08.091 ' 00:06:08.091 06:45:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.091 --rc genhtml_branch_coverage=1 00:06:08.091 --rc genhtml_function_coverage=1 00:06:08.091 --rc genhtml_legend=1 00:06:08.091 --rc geninfo_all_blocks=1 00:06:08.091 --rc geninfo_unexecuted_blocks=1 00:06:08.091 00:06:08.091 ' 00:06:08.091 06:45:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.091 --rc genhtml_branch_coverage=1 00:06:08.091 --rc genhtml_function_coverage=1 00:06:08.091 --rc genhtml_legend=1 00:06:08.091 --rc geninfo_all_blocks=1 00:06:08.091 --rc geninfo_unexecuted_blocks=1 00:06:08.091 00:06:08.091 ' 00:06:08.091 06:45:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.091 --rc genhtml_branch_coverage=1 00:06:08.091 --rc genhtml_function_coverage=1 00:06:08.091 --rc genhtml_legend=1 00:06:08.091 --rc geninfo_all_blocks=1 00:06:08.091 --rc geninfo_unexecuted_blocks=1 00:06:08.091 00:06:08.091 ' 00:06:08.091 06:45:12 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.091 06:45:12 -- nvmf/common.sh@7 -- # uname -s 00:06:08.091 06:45:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.091 06:45:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.091 06:45:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.091 06:45:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.091 06:45:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.091 06:45:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.091 06:45:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.091 06:45:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.091 06:45:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.091 06:45:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.091 06:45:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:06:08.091 06:45:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:06:08.091 06:45:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.091 06:45:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.091 06:45:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.091 06:45:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.091 06:45:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.091 06:45:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.091 06:45:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.091 06:45:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.091 06:45:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.091 06:45:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.091 06:45:12 -- paths/export.sh@5 -- # export PATH 00:06:08.092 06:45:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.092 06:45:12 -- nvmf/common.sh@46 -- # : 0 00:06:08.092 06:45:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:08.092 06:45:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:08.092 06:45:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:08.092 06:45:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.092 06:45:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.092 06:45:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:08.092 06:45:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:08.092 06:45:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:08.092 INFO: launching applications... 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66470 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:08.092 Waiting for target to run... 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.092 06:45:12 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66470 /var/tmp/spdk_tgt.sock 00:06:08.092 06:45:12 -- common/autotest_common.sh@829 -- # '[' -z 66470 ']' 00:06:08.092 06:45:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.092 06:45:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.092 06:45:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.092 06:45:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.092 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:06:08.092 [2024-12-13 06:45:12.480764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.092 [2024-12-13 06:45:12.481040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66470 ] 00:06:08.350 [2024-12-13 06:45:12.800791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.350 [2024-12-13 06:45:12.820587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.350 [2024-12-13 06:45:12.820744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.287 00:06:09.287 INFO: shutting down applications... 
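The launch-and-wait pattern above, condensed: start spdk_tgt with the extra-key JSON config in the background, then poll the RPC socket until it answers (reusing $rpc from the earlier sketch). The retry cap of 100 matches max_retries in the trace; probing with rpc_get_methods and the 0.1s interval are assumptions, since waitforlisten's internals are not shown here.

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!
  for (( i = 0; i < 100; i++ )); do
    $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1   # assumed poll interval
  done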
00:06:09.287 06:45:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.287 06:45:13 -- common/autotest_common.sh@862 -- # return 0 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66470 ]] 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66470 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66470 00:06:09.287 06:45:13 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66470 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:09.546 SPDK target shutdown done 00:06:09.546 06:45:13 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:09.546 Success 00:06:09.546 00:06:09.546 real 0m1.730s 00:06:09.546 user 0m1.506s 00:06:09.546 sys 0m0.339s 00:06:09.546 ************************************ 00:06:09.546 END TEST json_config_extra_key 00:06:09.546 ************************************ 00:06:09.546 06:45:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.546 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:06:09.546 06:45:14 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.546 06:45:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.546 06:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.546 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:06:09.546 ************************************ 00:06:09.546 START TEST alias_rpc 00:06:09.546 ************************************ 00:06:09.546 06:45:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.806 * Looking for test storage... 
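The shutdown just traced, condensed: SIGINT the target, then poll with kill -0 for up to 30 iterations, 0.5s apart (both numbers appear in the trace):

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
  done
  echo 'SPDK target shutdown done'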
00:06:09.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:09.806 06:45:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.806 06:45:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.806 06:45:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.806 06:45:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.806 06:45:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.806 06:45:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.806 06:45:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.806 06:45:14 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.806 06:45:14 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.806 06:45:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.806 06:45:14 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.806 06:45:14 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.806 06:45:14 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.806 06:45:14 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.806 06:45:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.806 06:45:14 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.806 06:45:14 -- scripts/common.sh@344 -- # : 1 00:06:09.806 06:45:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.806 06:45:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.806 06:45:14 -- scripts/common.sh@364 -- # decimal 1 00:06:09.806 06:45:14 -- scripts/common.sh@352 -- # local d=1 00:06:09.806 06:45:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.806 06:45:14 -- scripts/common.sh@354 -- # echo 1 00:06:09.806 06:45:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.806 06:45:14 -- scripts/common.sh@365 -- # decimal 2 00:06:09.806 06:45:14 -- scripts/common.sh@352 -- # local d=2 00:06:09.806 06:45:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.806 06:45:14 -- scripts/common.sh@354 -- # echo 2 00:06:09.806 06:45:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.806 06:45:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.806 06:45:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.806 06:45:14 -- scripts/common.sh@367 -- # return 0 00:06:09.806 06:45:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.806 06:45:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.806 --rc genhtml_branch_coverage=1 00:06:09.806 --rc genhtml_function_coverage=1 00:06:09.806 --rc genhtml_legend=1 00:06:09.806 --rc geninfo_all_blocks=1 00:06:09.806 --rc geninfo_unexecuted_blocks=1 00:06:09.806 00:06:09.806 ' 00:06:09.806 06:45:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.806 --rc genhtml_branch_coverage=1 00:06:09.806 --rc genhtml_function_coverage=1 00:06:09.806 --rc genhtml_legend=1 00:06:09.806 --rc geninfo_all_blocks=1 00:06:09.806 --rc geninfo_unexecuted_blocks=1 00:06:09.806 00:06:09.806 ' 00:06:09.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
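The xtrace block above is scripts/common.sh gating the lcov options on the installed lcov version ('lt 1.15 2'). Condensed to just the less-than case it implements here (the real cmp_versions also handles other operators):

  # split on the IFS=.-: characters from the trace, compare numerically
  # field by field; missing fields count as 0
  lt() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: use the branch/function coverage options'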
00:06:09.806 06:45:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.806 --rc genhtml_branch_coverage=1 00:06:09.806 --rc genhtml_function_coverage=1 00:06:09.806 --rc genhtml_legend=1 00:06:09.806 --rc geninfo_all_blocks=1 00:06:09.806 --rc geninfo_unexecuted_blocks=1 00:06:09.806 00:06:09.806 ' 00:06:09.806 06:45:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.806 --rc genhtml_branch_coverage=1 00:06:09.806 --rc genhtml_function_coverage=1 00:06:09.806 --rc genhtml_legend=1 00:06:09.806 --rc geninfo_all_blocks=1 00:06:09.806 --rc geninfo_unexecuted_blocks=1 00:06:09.806 00:06:09.806 ' 00:06:09.806 06:45:14 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.806 06:45:14 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66536 00:06:09.806 06:45:14 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66536 00:06:09.806 06:45:14 -- common/autotest_common.sh@829 -- # '[' -z 66536 ']' 00:06:09.806 06:45:14 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.806 06:45:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.806 06:45:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.806 06:45:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.806 06:45:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.806 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:06:09.806 [2024-12-13 06:45:14.264175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
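One detail worth noting in the alias_rpc setup above: the ERR trap, so any failed command tears the target down instead of leaking it:

  trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR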
00:06:09.806 [2024-12-13 06:45:14.264525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66536 ] 00:06:10.065 [2024-12-13 06:45:14.402448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.065 [2024-12-13 06:45:14.433416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.065 [2024-12-13 06:45:14.433796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.002 06:45:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.002 06:45:15 -- common/autotest_common.sh@862 -- # return 0 00:06:11.002 06:45:15 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:11.002 06:45:15 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66536 00:06:11.002 06:45:15 -- common/autotest_common.sh@936 -- # '[' -z 66536 ']' 00:06:11.002 06:45:15 -- common/autotest_common.sh@940 -- # kill -0 66536 00:06:11.002 06:45:15 -- common/autotest_common.sh@941 -- # uname 00:06:11.002 06:45:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.002 06:45:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66536 00:06:11.003 killing process with pid 66536 00:06:11.003 06:45:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.003 06:45:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.003 06:45:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66536' 00:06:11.003 06:45:15 -- common/autotest_common.sh@955 -- # kill 66536 00:06:11.003 06:45:15 -- common/autotest_common.sh@960 -- # wait 66536 00:06:11.262 ************************************ 00:06:11.262 END TEST alias_rpc 00:06:11.262 ************************************ 00:06:11.262 00:06:11.262 real 0m1.633s 00:06:11.262 user 0m1.901s 00:06:11.262 sys 0m0.316s 00:06:11.262 06:45:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.262 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 06:45:15 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:11.262 06:45:15 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:11.262 06:45:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.262 06:45:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.262 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.262 ************************************ 00:06:11.262 START TEST spdkcli_tcp 00:06:11.262 ************************************ 00:06:11.262 06:45:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:11.262 * Looking for test storage... 
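The killprocess helper traced above (for pids 66317, 66536, and again later) condenses to: confirm the pid is alive, log which process is dying, then kill and reap it. A simplified sketch that skips the sudo special-case visible in the trace:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    # the trace resolves the name via: ps --no-headers -o comm= "$pid"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }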
00:06:11.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:11.262 06:45:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.521 06:45:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.521 06:45:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.521 06:45:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.521 06:45:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.521 06:45:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.521 06:45:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.521 06:45:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.521 06:45:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.521 06:45:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.521 06:45:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.521 06:45:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.521 06:45:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.521 06:45:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.521 06:45:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.521 06:45:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.521 06:45:15 -- scripts/common.sh@344 -- # : 1 00:06:11.521 06:45:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.521 06:45:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.521 06:45:15 -- scripts/common.sh@364 -- # decimal 1 00:06:11.521 06:45:15 -- scripts/common.sh@352 -- # local d=1 00:06:11.521 06:45:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.521 06:45:15 -- scripts/common.sh@354 -- # echo 1 00:06:11.521 06:45:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.521 06:45:15 -- scripts/common.sh@365 -- # decimal 2 00:06:11.521 06:45:15 -- scripts/common.sh@352 -- # local d=2 00:06:11.521 06:45:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.521 06:45:15 -- scripts/common.sh@354 -- # echo 2 00:06:11.521 06:45:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.521 06:45:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.521 06:45:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.521 06:45:15 -- scripts/common.sh@367 -- # return 0 00:06:11.521 06:45:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.521 06:45:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.521 --rc genhtml_branch_coverage=1 00:06:11.521 --rc genhtml_function_coverage=1 00:06:11.521 --rc genhtml_legend=1 00:06:11.521 --rc geninfo_all_blocks=1 00:06:11.521 --rc geninfo_unexecuted_blocks=1 00:06:11.521 00:06:11.521 ' 00:06:11.521 06:45:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.521 --rc genhtml_branch_coverage=1 00:06:11.521 --rc genhtml_function_coverage=1 00:06:11.521 --rc genhtml_legend=1 00:06:11.521 --rc geninfo_all_blocks=1 00:06:11.521 --rc geninfo_unexecuted_blocks=1 00:06:11.521 00:06:11.521 ' 00:06:11.521 06:45:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.521 --rc genhtml_branch_coverage=1 00:06:11.521 --rc genhtml_function_coverage=1 00:06:11.521 --rc genhtml_legend=1 00:06:11.521 --rc geninfo_all_blocks=1 00:06:11.521 --rc geninfo_unexecuted_blocks=1 00:06:11.521 00:06:11.521 ' 00:06:11.521 06:45:15 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.521 --rc genhtml_branch_coverage=1 00:06:11.521 --rc genhtml_function_coverage=1 00:06:11.521 --rc genhtml_legend=1 00:06:11.521 --rc geninfo_all_blocks=1 00:06:11.521 --rc geninfo_unexecuted_blocks=1 00:06:11.521 00:06:11.521 ' 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:11.521 06:45:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:11.521 06:45:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.521 06:45:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.521 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66619 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 66619 00:06:11.521 06:45:15 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.521 06:45:15 -- common/autotest_common.sh@829 -- # '[' -z 66619 ']' 00:06:11.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.521 06:45:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.521 06:45:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.521 06:45:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.521 06:45:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.521 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.521 [2024-12-13 06:45:15.955499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
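The spdkcli_tcp target above runs with -m 0x3 -p 0: a DPDK coremask with bits 0 and 1 set (two reactors) and core 0 as the main core, which is why two "Reactor started" notices follow. The mask-to-core mapping, as a one-liner:

  printf 'cores:'; for c in {0..31}; do (( (0x3 >> c) & 1 )) && printf ' %d' "$c"; done; echo   # -> cores: 0 1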
00:06:11.521 [2024-12-13 06:45:15.955605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66619 ] 00:06:11.781 [2024-12-13 06:45:16.089060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.781 [2024-12-13 06:45:16.120198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.781 [2024-12-13 06:45:16.120501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.781 [2024-12-13 06:45:16.120679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.720 06:45:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.720 06:45:16 -- common/autotest_common.sh@862 -- # return 0 00:06:12.720 06:45:16 -- spdkcli/tcp.sh@31 -- # socat_pid=66636 00:06:12.720 06:45:16 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:12.720 06:45:16 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:12.720 [ 00:06:12.720 "bdev_malloc_delete", 00:06:12.720 "bdev_malloc_create", 00:06:12.720 "bdev_null_resize", 00:06:12.720 "bdev_null_delete", 00:06:12.720 "bdev_null_create", 00:06:12.720 "bdev_nvme_cuse_unregister", 00:06:12.720 "bdev_nvme_cuse_register", 00:06:12.720 "bdev_opal_new_user", 00:06:12.720 "bdev_opal_set_lock_state", 00:06:12.720 "bdev_opal_delete", 00:06:12.720 "bdev_opal_get_info", 00:06:12.720 "bdev_opal_create", 00:06:12.720 "bdev_nvme_opal_revert", 00:06:12.720 "bdev_nvme_opal_init", 00:06:12.720 "bdev_nvme_send_cmd", 00:06:12.720 "bdev_nvme_get_path_iostat", 00:06:12.720 "bdev_nvme_get_mdns_discovery_info", 00:06:12.720 "bdev_nvme_stop_mdns_discovery", 00:06:12.720 "bdev_nvme_start_mdns_discovery", 00:06:12.720 "bdev_nvme_set_multipath_policy", 00:06:12.720 "bdev_nvme_set_preferred_path", 00:06:12.720 "bdev_nvme_get_io_paths", 00:06:12.720 "bdev_nvme_remove_error_injection", 00:06:12.720 "bdev_nvme_add_error_injection", 00:06:12.720 "bdev_nvme_get_discovery_info", 00:06:12.720 "bdev_nvme_stop_discovery", 00:06:12.720 "bdev_nvme_start_discovery", 00:06:12.720 "bdev_nvme_get_controller_health_info", 00:06:12.720 "bdev_nvme_disable_controller", 00:06:12.720 "bdev_nvme_enable_controller", 00:06:12.720 "bdev_nvme_reset_controller", 00:06:12.720 "bdev_nvme_get_transport_statistics", 00:06:12.720 "bdev_nvme_apply_firmware", 00:06:12.720 "bdev_nvme_detach_controller", 00:06:12.720 "bdev_nvme_get_controllers", 00:06:12.720 "bdev_nvme_attach_controller", 00:06:12.720 "bdev_nvme_set_hotplug", 00:06:12.720 "bdev_nvme_set_options", 00:06:12.720 "bdev_passthru_delete", 00:06:12.720 "bdev_passthru_create", 00:06:12.720 "bdev_lvol_grow_lvstore", 00:06:12.720 "bdev_lvol_get_lvols", 00:06:12.720 "bdev_lvol_get_lvstores", 00:06:12.720 "bdev_lvol_delete", 00:06:12.720 "bdev_lvol_set_read_only", 00:06:12.720 "bdev_lvol_resize", 00:06:12.720 "bdev_lvol_decouple_parent", 00:06:12.720 "bdev_lvol_inflate", 00:06:12.720 "bdev_lvol_rename", 00:06:12.720 "bdev_lvol_clone_bdev", 00:06:12.720 "bdev_lvol_clone", 00:06:12.720 "bdev_lvol_snapshot", 00:06:12.720 "bdev_lvol_create", 00:06:12.720 "bdev_lvol_delete_lvstore", 00:06:12.720 "bdev_lvol_rename_lvstore", 00:06:12.720 "bdev_lvol_create_lvstore", 00:06:12.720 "bdev_raid_set_options", 00:06:12.720 "bdev_raid_remove_base_bdev", 00:06:12.720 "bdev_raid_add_base_bdev", 
00:06:12.720 "bdev_raid_delete", 00:06:12.720 "bdev_raid_create", 00:06:12.720 "bdev_raid_get_bdevs", 00:06:12.720 "bdev_error_inject_error", 00:06:12.720 "bdev_error_delete", 00:06:12.720 "bdev_error_create", 00:06:12.720 "bdev_split_delete", 00:06:12.720 "bdev_split_create", 00:06:12.720 "bdev_delay_delete", 00:06:12.720 "bdev_delay_create", 00:06:12.720 "bdev_delay_update_latency", 00:06:12.720 "bdev_zone_block_delete", 00:06:12.720 "bdev_zone_block_create", 00:06:12.720 "blobfs_create", 00:06:12.720 "blobfs_detect", 00:06:12.720 "blobfs_set_cache_size", 00:06:12.720 "bdev_aio_delete", 00:06:12.720 "bdev_aio_rescan", 00:06:12.720 "bdev_aio_create", 00:06:12.720 "bdev_ftl_set_property", 00:06:12.720 "bdev_ftl_get_properties", 00:06:12.721 "bdev_ftl_get_stats", 00:06:12.721 "bdev_ftl_unmap", 00:06:12.721 "bdev_ftl_unload", 00:06:12.721 "bdev_ftl_delete", 00:06:12.721 "bdev_ftl_load", 00:06:12.721 "bdev_ftl_create", 00:06:12.721 "bdev_virtio_attach_controller", 00:06:12.721 "bdev_virtio_scsi_get_devices", 00:06:12.721 "bdev_virtio_detach_controller", 00:06:12.721 "bdev_virtio_blk_set_hotplug", 00:06:12.721 "bdev_iscsi_delete", 00:06:12.721 "bdev_iscsi_create", 00:06:12.721 "bdev_iscsi_set_options", 00:06:12.721 "bdev_uring_delete", 00:06:12.721 "bdev_uring_create", 00:06:12.721 "accel_error_inject_error", 00:06:12.721 "ioat_scan_accel_module", 00:06:12.721 "dsa_scan_accel_module", 00:06:12.721 "iaa_scan_accel_module", 00:06:12.721 "iscsi_set_options", 00:06:12.721 "iscsi_get_auth_groups", 00:06:12.721 "iscsi_auth_group_remove_secret", 00:06:12.721 "iscsi_auth_group_add_secret", 00:06:12.721 "iscsi_delete_auth_group", 00:06:12.721 "iscsi_create_auth_group", 00:06:12.721 "iscsi_set_discovery_auth", 00:06:12.721 "iscsi_get_options", 00:06:12.721 "iscsi_target_node_request_logout", 00:06:12.721 "iscsi_target_node_set_redirect", 00:06:12.721 "iscsi_target_node_set_auth", 00:06:12.721 "iscsi_target_node_add_lun", 00:06:12.721 "iscsi_get_connections", 00:06:12.721 "iscsi_portal_group_set_auth", 00:06:12.721 "iscsi_start_portal_group", 00:06:12.721 "iscsi_delete_portal_group", 00:06:12.721 "iscsi_create_portal_group", 00:06:12.721 "iscsi_get_portal_groups", 00:06:12.721 "iscsi_delete_target_node", 00:06:12.721 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.721 "iscsi_target_node_add_pg_ig_maps", 00:06:12.721 "iscsi_create_target_node", 00:06:12.721 "iscsi_get_target_nodes", 00:06:12.721 "iscsi_delete_initiator_group", 00:06:12.721 "iscsi_initiator_group_remove_initiators", 00:06:12.721 "iscsi_initiator_group_add_initiators", 00:06:12.721 "iscsi_create_initiator_group", 00:06:12.721 "iscsi_get_initiator_groups", 00:06:12.721 "nvmf_set_crdt", 00:06:12.721 "nvmf_set_config", 00:06:12.721 "nvmf_set_max_subsystems", 00:06:12.721 "nvmf_subsystem_get_listeners", 00:06:12.721 "nvmf_subsystem_get_qpairs", 00:06:12.721 "nvmf_subsystem_get_controllers", 00:06:12.721 "nvmf_get_stats", 00:06:12.721 "nvmf_get_transports", 00:06:12.721 "nvmf_create_transport", 00:06:12.721 "nvmf_get_targets", 00:06:12.721 "nvmf_delete_target", 00:06:12.721 "nvmf_create_target", 00:06:12.721 "nvmf_subsystem_allow_any_host", 00:06:12.721 "nvmf_subsystem_remove_host", 00:06:12.721 "nvmf_subsystem_add_host", 00:06:12.721 "nvmf_subsystem_remove_ns", 00:06:12.721 "nvmf_subsystem_add_ns", 00:06:12.721 "nvmf_subsystem_listener_set_ana_state", 00:06:12.721 "nvmf_discovery_get_referrals", 00:06:12.721 "nvmf_discovery_remove_referral", 00:06:12.721 "nvmf_discovery_add_referral", 00:06:12.721 "nvmf_subsystem_remove_listener", 00:06:12.721 
"nvmf_subsystem_add_listener", 00:06:12.721 "nvmf_delete_subsystem", 00:06:12.721 "nvmf_create_subsystem", 00:06:12.721 "nvmf_get_subsystems", 00:06:12.721 "env_dpdk_get_mem_stats", 00:06:12.721 "nbd_get_disks", 00:06:12.721 "nbd_stop_disk", 00:06:12.721 "nbd_start_disk", 00:06:12.721 "ublk_recover_disk", 00:06:12.721 "ublk_get_disks", 00:06:12.721 "ublk_stop_disk", 00:06:12.721 "ublk_start_disk", 00:06:12.721 "ublk_destroy_target", 00:06:12.721 "ublk_create_target", 00:06:12.721 "virtio_blk_create_transport", 00:06:12.721 "virtio_blk_get_transports", 00:06:12.721 "vhost_controller_set_coalescing", 00:06:12.721 "vhost_get_controllers", 00:06:12.721 "vhost_delete_controller", 00:06:12.721 "vhost_create_blk_controller", 00:06:12.721 "vhost_scsi_controller_remove_target", 00:06:12.721 "vhost_scsi_controller_add_target", 00:06:12.721 "vhost_start_scsi_controller", 00:06:12.721 "vhost_create_scsi_controller", 00:06:12.721 "thread_set_cpumask", 00:06:12.721 "framework_get_scheduler", 00:06:12.721 "framework_set_scheduler", 00:06:12.721 "framework_get_reactors", 00:06:12.721 "thread_get_io_channels", 00:06:12.721 "thread_get_pollers", 00:06:12.721 "thread_get_stats", 00:06:12.721 "framework_monitor_context_switch", 00:06:12.721 "spdk_kill_instance", 00:06:12.721 "log_enable_timestamps", 00:06:12.721 "log_get_flags", 00:06:12.721 "log_clear_flag", 00:06:12.721 "log_set_flag", 00:06:12.721 "log_get_level", 00:06:12.721 "log_set_level", 00:06:12.721 "log_get_print_level", 00:06:12.721 "log_set_print_level", 00:06:12.721 "framework_enable_cpumask_locks", 00:06:12.721 "framework_disable_cpumask_locks", 00:06:12.721 "framework_wait_init", 00:06:12.721 "framework_start_init", 00:06:12.721 "scsi_get_devices", 00:06:12.721 "bdev_get_histogram", 00:06:12.721 "bdev_enable_histogram", 00:06:12.721 "bdev_set_qos_limit", 00:06:12.721 "bdev_set_qd_sampling_period", 00:06:12.721 "bdev_get_bdevs", 00:06:12.721 "bdev_reset_iostat", 00:06:12.721 "bdev_get_iostat", 00:06:12.721 "bdev_examine", 00:06:12.721 "bdev_wait_for_examine", 00:06:12.721 "bdev_set_options", 00:06:12.721 "notify_get_notifications", 00:06:12.721 "notify_get_types", 00:06:12.721 "accel_get_stats", 00:06:12.721 "accel_set_options", 00:06:12.721 "accel_set_driver", 00:06:12.721 "accel_crypto_key_destroy", 00:06:12.721 "accel_crypto_keys_get", 00:06:12.721 "accel_crypto_key_create", 00:06:12.721 "accel_assign_opc", 00:06:12.721 "accel_get_module_info", 00:06:12.721 "accel_get_opc_assignments", 00:06:12.721 "vmd_rescan", 00:06:12.721 "vmd_remove_device", 00:06:12.721 "vmd_enable", 00:06:12.721 "sock_set_default_impl", 00:06:12.721 "sock_impl_set_options", 00:06:12.721 "sock_impl_get_options", 00:06:12.721 "iobuf_get_stats", 00:06:12.721 "iobuf_set_options", 00:06:12.721 "framework_get_pci_devices", 00:06:12.721 "framework_get_config", 00:06:12.721 "framework_get_subsystems", 00:06:12.721 "trace_get_info", 00:06:12.721 "trace_get_tpoint_group_mask", 00:06:12.721 "trace_disable_tpoint_group", 00:06:12.721 "trace_enable_tpoint_group", 00:06:12.721 "trace_clear_tpoint_mask", 00:06:12.721 "trace_set_tpoint_mask", 00:06:12.721 "spdk_get_version", 00:06:12.721 "rpc_get_methods" 00:06:12.721 ] 00:06:12.721 06:45:17 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.721 06:45:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.721 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:06:12.721 06:45:17 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.721 06:45:17 -- spdkcli/tcp.sh@38 -- # killprocess 66619 00:06:12.721 
06:45:17 -- common/autotest_common.sh@936 -- # '[' -z 66619 ']' 00:06:12.721 06:45:17 -- common/autotest_common.sh@940 -- # kill -0 66619 00:06:12.721 06:45:17 -- common/autotest_common.sh@941 -- # uname 00:06:12.721 06:45:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.721 06:45:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66619 00:06:12.721 killing process with pid 66619 00:06:12.721 06:45:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.721 06:45:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.721 06:45:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66619' 00:06:12.721 06:45:17 -- common/autotest_common.sh@955 -- # kill 66619 00:06:12.721 06:45:17 -- common/autotest_common.sh@960 -- # wait 66619 00:06:12.980 ************************************ 00:06:12.980 END TEST spdkcli_tcp 00:06:12.980 ************************************ 00:06:12.980 00:06:12.980 real 0m1.763s 00:06:12.980 user 0m3.357s 00:06:12.980 sys 0m0.371s 00:06:12.980 06:45:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.980 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.239 06:45:17 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.239 06:45:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.239 06:45:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.239 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.239 ************************************ 00:06:13.239 START TEST dpdk_mem_utility 00:06:13.239 ************************************ 00:06:13.239 06:45:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.239 * Looking for test storage... 00:06:13.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:13.239 06:45:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.239 06:45:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.239 06:45:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.239 06:45:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.239 06:45:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.239 06:45:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.239 06:45:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.239 06:45:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.239 06:45:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.239 06:45:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.239 06:45:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.239 06:45:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.239 06:45:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.239 06:45:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.239 06:45:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.239 06:45:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.239 06:45:17 -- scripts/common.sh@344 -- # : 1 00:06:13.239 06:45:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.239 06:45:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.239 06:45:17 -- scripts/common.sh@364 -- # decimal 1 00:06:13.239 06:45:17 -- scripts/common.sh@352 -- # local d=1 00:06:13.239 06:45:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.239 06:45:17 -- scripts/common.sh@354 -- # echo 1 00:06:13.239 06:45:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.239 06:45:17 -- scripts/common.sh@365 -- # decimal 2 00:06:13.239 06:45:17 -- scripts/common.sh@352 -- # local d=2 00:06:13.239 06:45:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.239 06:45:17 -- scripts/common.sh@354 -- # echo 2 00:06:13.239 06:45:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.240 06:45:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.240 06:45:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.240 06:45:17 -- scripts/common.sh@367 -- # return 0 00:06:13.240 06:45:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.240 06:45:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.240 --rc genhtml_branch_coverage=1 00:06:13.240 --rc genhtml_function_coverage=1 00:06:13.240 --rc genhtml_legend=1 00:06:13.240 --rc geninfo_all_blocks=1 00:06:13.240 --rc geninfo_unexecuted_blocks=1 00:06:13.240 00:06:13.240 ' 00:06:13.240 06:45:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.240 --rc genhtml_branch_coverage=1 00:06:13.240 --rc genhtml_function_coverage=1 00:06:13.240 --rc genhtml_legend=1 00:06:13.240 --rc geninfo_all_blocks=1 00:06:13.240 --rc geninfo_unexecuted_blocks=1 00:06:13.240 00:06:13.240 ' 00:06:13.240 06:45:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.240 --rc genhtml_branch_coverage=1 00:06:13.240 --rc genhtml_function_coverage=1 00:06:13.240 --rc genhtml_legend=1 00:06:13.240 --rc geninfo_all_blocks=1 00:06:13.240 --rc geninfo_unexecuted_blocks=1 00:06:13.240 00:06:13.240 ' 00:06:13.240 06:45:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.240 --rc genhtml_branch_coverage=1 00:06:13.240 --rc genhtml_function_coverage=1 00:06:13.240 --rc genhtml_legend=1 00:06:13.240 --rc geninfo_all_blocks=1 00:06:13.240 --rc geninfo_unexecuted_blocks=1 00:06:13.240 00:06:13.240 ' 00:06:13.240 06:45:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:13.240 06:45:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66717 00:06:13.240 06:45:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.240 06:45:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66717 00:06:13.240 06:45:17 -- common/autotest_common.sh@829 -- # '[' -z 66717 ']' 00:06:13.240 06:45:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.240 06:45:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.240 06:45:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
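Once the target is up, the rest of this test is three commands (paths as shown in the surrounding trace): dump the EAL memory stats over RPC, then post-process the dump file twice.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # -> { "filename": "/tmp/spdk_mem_dump.txt" }
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element detail for heap id 0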
00:06:13.240 06:45:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.240 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.499 [2024-12-13 06:45:17.760174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.499 [2024-12-13 06:45:17.760902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66717 ] 00:06:13.499 [2024-12-13 06:45:17.905331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.499 [2024-12-13 06:45:17.939614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.499 [2024-12-13 06:45:17.939816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.437 06:45:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.437 06:45:18 -- common/autotest_common.sh@862 -- # return 0 00:06:14.437 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.437 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.437 06:45:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.437 06:45:18 -- common/autotest_common.sh@10 -- # set +x 00:06:14.437 { 00:06:14.437 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.437 } 00:06:14.437 06:45:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.437 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:14.437 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:14.437 1 heaps totaling size 814.000000 MiB 00:06:14.437 size: 814.000000 MiB heap id: 0 00:06:14.437 end heaps---------- 00:06:14.437 8 mempools totaling size 598.116089 MiB 00:06:14.437 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.437 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.437 size: 84.521057 MiB name: bdev_io_66717 00:06:14.437 size: 51.011292 MiB name: evtpool_66717 00:06:14.437 size: 50.003479 MiB name: msgpool_66717 00:06:14.437 size: 21.763794 MiB name: PDU_Pool 00:06:14.437 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.437 size: 0.026123 MiB name: Session_Pool 00:06:14.437 end mempools------- 00:06:14.437 6 memzones totaling size 4.142822 MiB 00:06:14.437 size: 1.000366 MiB name: RG_ring_0_66717 00:06:14.437 size: 1.000366 MiB name: RG_ring_1_66717 00:06:14.437 size: 1.000366 MiB name: RG_ring_4_66717 00:06:14.437 size: 1.000366 MiB name: RG_ring_5_66717 00:06:14.437 size: 0.125366 MiB name: RG_ring_2_66717 00:06:14.437 size: 0.015991 MiB name: RG_ring_3_66717 00:06:14.437 end memzones------- 00:06:14.437 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.437 heap id: 0 total size: 814.000000 MiB number of busy elements: 300 number of free elements: 15 00:06:14.437 list of free elements. 
size: 12.471924 MiB 00:06:14.437 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:14.437 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:14.437 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:14.437 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:14.437 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:14.437 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:14.437 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:14.438 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:14.438 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:14.438 element at address: 0x20001aa00000 with size: 0.569702 MiB 00:06:14.438 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:14.438 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:14.438 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:14.438 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:14.438 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:14.438 list of standard malloc elements. size: 199.265503 MiB 00:06:14.438 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:14.438 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:14.438 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:14.438 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:14.438 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:14.438 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.438 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:14.438 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.438 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:14.438 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:06:14.438 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:14.438 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:14.438 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93940 with size: 0.000183 MiB 
00:06:14.439 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:14.439 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:14.439 element at 
address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f000 
with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:14.439 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:14.439 list of memzone associated elements. size: 602.262573 MiB 00:06:14.439 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:14.439 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.439 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:14.439 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.439 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:14.440 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66717_0 00:06:14.440 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:14.440 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66717_0 00:06:14.440 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:14.440 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66717_0 00:06:14.440 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:14.440 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.440 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:14.440 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.440 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:14.440 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66717 00:06:14.440 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:14.440 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66717 00:06:14.440 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.440 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66717 00:06:14.440 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:14.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.440 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:14.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.440 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:14.440 associated memzone info: size: 1.007996 MiB name: 
MP_PDU_data_out_Pool 00:06:14.440 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:14.440 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.440 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:14.440 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66717 00:06:14.440 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:14.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66717 00:06:14.440 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:14.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66717 00:06:14.440 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:14.440 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66717 00:06:14.440 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:14.440 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66717 00:06:14.440 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:14.440 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.440 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:14.440 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.440 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:14.440 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.440 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:14.440 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66717 00:06:14.440 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:14.440 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.440 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:14.440 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.440 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:14.440 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66717 00:06:14.440 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:14.440 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.440 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:14.440 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66717 00:06:14.440 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:14.440 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66717 00:06:14.440 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:14.440 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.440 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.440 06:45:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66717 00:06:14.440 06:45:18 -- common/autotest_common.sh@936 -- # '[' -z 66717 ']' 00:06:14.440 06:45:18 -- common/autotest_common.sh@940 -- # kill -0 66717 00:06:14.440 06:45:18 -- common/autotest_common.sh@941 -- # uname 00:06:14.440 06:45:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.440 06:45:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66717 00:06:14.440 06:45:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.440 06:45:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.440 06:45:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66717' 00:06:14.440 killing process with pid 66717 00:06:14.440 06:45:18 -- 
common/autotest_common.sh@955 -- # kill 66717 00:06:14.440 06:45:18 -- common/autotest_common.sh@960 -- # wait 66717 00:06:14.700 00:06:14.700 real 0m1.625s 00:06:14.700 user 0m1.850s 00:06:14.700 sys 0m0.343s 00:06:14.700 06:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.700 ************************************ 00:06:14.700 END TEST dpdk_mem_utility 00:06:14.700 ************************************ 00:06:14.700 06:45:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.700 06:45:19 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:14.700 06:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.700 06:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.700 06:45:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.700 ************************************ 00:06:14.700 START TEST event 00:06:14.700 ************************************ 00:06:14.700 06:45:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:14.959 * Looking for test storage... 00:06:14.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:14.959 06:45:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:14.959 06:45:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:14.959 06:45:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:14.959 06:45:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:14.959 06:45:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:14.959 06:45:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:14.959 06:45:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:14.959 06:45:19 -- scripts/common.sh@335 -- # IFS=.-: 00:06:14.959 06:45:19 -- scripts/common.sh@335 -- # read -ra ver1 00:06:14.959 06:45:19 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.959 06:45:19 -- scripts/common.sh@336 -- # read -ra ver2 00:06:14.959 06:45:19 -- scripts/common.sh@337 -- # local 'op=<' 00:06:14.959 06:45:19 -- scripts/common.sh@339 -- # ver1_l=2 00:06:14.959 06:45:19 -- scripts/common.sh@340 -- # ver2_l=1 00:06:14.959 06:45:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:14.959 06:45:19 -- scripts/common.sh@343 -- # case "$op" in 00:06:14.959 06:45:19 -- scripts/common.sh@344 -- # : 1 00:06:14.959 06:45:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:14.959 06:45:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.959 06:45:19 -- scripts/common.sh@364 -- # decimal 1 00:06:14.959 06:45:19 -- scripts/common.sh@352 -- # local d=1 00:06:14.959 06:45:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.959 06:45:19 -- scripts/common.sh@354 -- # echo 1 00:06:14.959 06:45:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:14.959 06:45:19 -- scripts/common.sh@365 -- # decimal 2 00:06:14.960 06:45:19 -- scripts/common.sh@352 -- # local d=2 00:06:14.960 06:45:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.960 06:45:19 -- scripts/common.sh@354 -- # echo 2 00:06:14.960 06:45:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:14.960 06:45:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:14.960 06:45:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:14.960 06:45:19 -- scripts/common.sh@367 -- # return 0 00:06:14.960 06:45:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.960 06:45:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:14.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.960 --rc genhtml_branch_coverage=1 00:06:14.960 --rc genhtml_function_coverage=1 00:06:14.960 --rc genhtml_legend=1 00:06:14.960 --rc geninfo_all_blocks=1 00:06:14.960 --rc geninfo_unexecuted_blocks=1 00:06:14.960 00:06:14.960 ' 00:06:14.960 06:45:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:14.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.960 --rc genhtml_branch_coverage=1 00:06:14.960 --rc genhtml_function_coverage=1 00:06:14.960 --rc genhtml_legend=1 00:06:14.960 --rc geninfo_all_blocks=1 00:06:14.960 --rc geninfo_unexecuted_blocks=1 00:06:14.960 00:06:14.960 ' 00:06:14.960 06:45:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:14.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.960 --rc genhtml_branch_coverage=1 00:06:14.960 --rc genhtml_function_coverage=1 00:06:14.960 --rc genhtml_legend=1 00:06:14.960 --rc geninfo_all_blocks=1 00:06:14.960 --rc geninfo_unexecuted_blocks=1 00:06:14.960 00:06:14.960 ' 00:06:14.960 06:45:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:14.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.960 --rc genhtml_branch_coverage=1 00:06:14.960 --rc genhtml_function_coverage=1 00:06:14.960 --rc genhtml_legend=1 00:06:14.960 --rc geninfo_all_blocks=1 00:06:14.960 --rc geninfo_unexecuted_blocks=1 00:06:14.960 00:06:14.960 ' 00:06:14.960 06:45:19 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:14.960 06:45:19 -- bdev/nbd_common.sh@6 -- # set -e 00:06:14.960 06:45:19 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.960 06:45:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:14.960 06:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.960 06:45:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.960 ************************************ 00:06:14.960 START TEST event_perf 00:06:14.960 ************************************ 00:06:14.960 06:45:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.960 Running I/O for 1 seconds...[2024-12-13 06:45:19.417310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
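For reference, the event_perf binary being launched above takes a reactor core mask and a run time; a minimal manual invocation, assuming the same build-tree layout as this run, would be (path and flags copied from the trace):

    # Hedged sketch: re-run the same perf measurement by hand.
    # -m 0xF : schedule reactors on four cores (lcores 0-3)
    # -t 1   : run the measurement loop for one second
    EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
    "$EVENT_PERF" -m 0xF -t 1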
00:06:14.960 [2024-12-13 06:45:19.417919] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66790 ] 00:06:15.219 [2024-12-13 06:45:19.559263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.219 [2024-12-13 06:45:19.597462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.219 [2024-12-13 06:45:19.597610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.219 [2024-12-13 06:45:19.597675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.219 [2024-12-13 06:45:19.597674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.201 Running I/O for 1 seconds... 00:06:16.201 lcore 0: 195834 00:06:16.201 lcore 1: 195835 00:06:16.201 lcore 2: 195834 00:06:16.201 lcore 3: 195836 00:06:16.201 done. 00:06:16.201 00:06:16.201 real 0m1.250s 00:06:16.201 user 0m4.082s 00:06:16.201 sys 0m0.046s 00:06:16.201 06:45:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.201 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:06:16.201 ************************************ 00:06:16.201 END TEST event_perf 00:06:16.201 ************************************ 00:06:16.201 06:45:20 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:16.201 06:45:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:16.201 06:45:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.201 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:06:16.201 ************************************ 00:06:16.201 START TEST event_reactor 00:06:16.201 ************************************ 00:06:16.201 06:45:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:16.201 [2024-12-13 06:45:20.713184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
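The per-lcore counters above can be summed to get the aggregate event rate for the run; a small sketch using the numbers from this pass (roughly 783k events across four cores in one second):

    # Sum the 'lcore N: count' lines printed by event_perf above.
    printf 'lcore 0: 195834\nlcore 1: 195835\nlcore 2: 195834\nlcore 3: 195836\n' \
      | awk '{ sum += $NF } END { printf "total: %d events/sec\n", sum }'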
00:06:16.201 [2024-12-13 06:45:20.713482] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66834 ] 00:06:16.460 [2024-12-13 06:45:20.849283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.460 [2024-12-13 06:45:20.882054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.840 test_start 00:06:17.840 oneshot 00:06:17.840 tick 100 00:06:17.840 tick 100 00:06:17.840 tick 250 00:06:17.840 tick 100 00:06:17.840 tick 100 00:06:17.840 tick 100 00:06:17.840 tick 250 00:06:17.840 tick 500 00:06:17.840 tick 100 00:06:17.840 tick 100 00:06:17.840 tick 250 00:06:17.840 tick 100 00:06:17.840 tick 100 00:06:17.840 test_end 00:06:17.840 00:06:17.840 real 0m1.238s 00:06:17.840 user 0m1.089s 00:06:17.840 sys 0m0.042s 00:06:17.840 ************************************ 00:06:17.840 END TEST event_reactor 00:06:17.840 ************************************ 00:06:17.840 06:45:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.840 06:45:21 -- common/autotest_common.sh@10 -- # set +x 00:06:17.840 06:45:21 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.840 06:45:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:17.840 06:45:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.840 06:45:21 -- common/autotest_common.sh@10 -- # set +x 00:06:17.840 ************************************ 00:06:17.840 START TEST event_reactor_perf 00:06:17.840 ************************************ 00:06:17.840 06:45:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.840 [2024-12-13 06:45:22.006517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
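The oneshot/tick trace above is the reactor timer test's output; the mix of 100/250/500 intervals can be tallied with a one-liner, assuming the output was captured to a file (reactor.log here is hypothetical):

    # Count how many timers fired per tick period in the capture.
    grep -o 'tick [0-9]*' reactor.log | sort | uniq -c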
00:06:17.840 [2024-12-13 06:45:22.006607] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66864 ] 00:06:17.840 [2024-12-13 06:45:22.145620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.840 [2024-12-13 06:45:22.186447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.777 test_start 00:06:18.777 test_end 00:06:18.777 Performance: 413880 events per second 00:06:18.777 00:06:18.777 real 0m1.249s 00:06:18.777 user 0m1.096s 00:06:18.777 sys 0m0.045s 00:06:18.777 06:45:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.778 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:18.778 ************************************ 00:06:18.778 END TEST event_reactor_perf 00:06:18.778 ************************************ 00:06:18.778 06:45:23 -- event/event.sh@49 -- # uname -s 00:06:18.778 06:45:23 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:18.778 06:45:23 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:18.778 06:45:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.778 06:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.778 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.037 ************************************ 00:06:19.037 START TEST event_scheduler 00:06:19.037 ************************************ 00:06:19.037 06:45:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:19.037 * Looking for test storage... 00:06:19.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:19.037 06:45:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:19.037 06:45:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:19.037 06:45:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:19.037 06:45:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:19.037 06:45:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:19.037 06:45:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:19.037 06:45:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:19.037 06:45:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:19.037 06:45:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:19.037 06:45:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.037 06:45:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:19.037 06:45:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:19.037 06:45:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:19.037 06:45:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:19.037 06:45:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:19.037 06:45:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:19.037 06:45:23 -- scripts/common.sh@344 -- # : 1 00:06:19.037 06:45:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:19.037 06:45:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.037 06:45:23 -- scripts/common.sh@364 -- # decimal 1 00:06:19.037 06:45:23 -- scripts/common.sh@352 -- # local d=1 00:06:19.037 06:45:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.037 06:45:23 -- scripts/common.sh@354 -- # echo 1 00:06:19.037 06:45:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:19.037 06:45:23 -- scripts/common.sh@365 -- # decimal 2 00:06:19.037 06:45:23 -- scripts/common.sh@352 -- # local d=2 00:06:19.037 06:45:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.037 06:45:23 -- scripts/common.sh@354 -- # echo 2 00:06:19.037 06:45:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:19.037 06:45:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:19.037 06:45:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:19.037 06:45:23 -- scripts/common.sh@367 -- # return 0 00:06:19.037 06:45:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.037 06:45:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.037 --rc genhtml_branch_coverage=1 00:06:19.037 --rc genhtml_function_coverage=1 00:06:19.037 --rc genhtml_legend=1 00:06:19.037 --rc geninfo_all_blocks=1 00:06:19.037 --rc geninfo_unexecuted_blocks=1 00:06:19.037 00:06:19.037 ' 00:06:19.037 06:45:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.037 --rc genhtml_branch_coverage=1 00:06:19.037 --rc genhtml_function_coverage=1 00:06:19.037 --rc genhtml_legend=1 00:06:19.037 --rc geninfo_all_blocks=1 00:06:19.037 --rc geninfo_unexecuted_blocks=1 00:06:19.037 00:06:19.037 ' 00:06:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.037 06:45:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.037 --rc genhtml_branch_coverage=1 00:06:19.037 --rc genhtml_function_coverage=1 00:06:19.037 --rc genhtml_legend=1 00:06:19.037 --rc geninfo_all_blocks=1 00:06:19.037 --rc geninfo_unexecuted_blocks=1 00:06:19.037 00:06:19.037 ' 00:06:19.037 06:45:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.037 --rc genhtml_branch_coverage=1 00:06:19.037 --rc genhtml_function_coverage=1 00:06:19.037 --rc genhtml_legend=1 00:06:19.037 --rc geninfo_all_blocks=1 00:06:19.037 --rc geninfo_unexecuted_blocks=1 00:06:19.037 00:06:19.037 ' 00:06:19.037 06:45:23 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:19.037 06:45:23 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66927 00:06:19.037 06:45:23 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.037 06:45:23 -- scheduler/scheduler.sh@37 -- # waitforlisten 66927 00:06:19.037 06:45:23 -- common/autotest_common.sh@829 -- # '[' -z 66927 ']' 00:06:19.037 06:45:23 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:19.037 06:45:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.037 06:45:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.037 06:45:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
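Because the scheduler app is started with --wait-for-rpc, the harness must poll for the RPC socket before it can issue framework_set_scheduler; a simplified stand-in for that wait loop (the real waitforlisten helper in autotest_common.sh does more, e.g. PID liveness checks, so this is only a sketch):

    # Minimal sketch: block until the SPDK RPC Unix socket appears.
    wait_for_rpc_sock() {
      local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
      while (( retries-- > 0 )); do
        [[ -S "$sock" ]] && return 0
        sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
    }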
00:06:19.037 06:45:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.037 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.037 [2024-12-13 06:45:23.533719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.037 [2024-12-13 06:45:23.533971] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66927 ] 00:06:19.296 [2024-12-13 06:45:23.676248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.296 [2024-12-13 06:45:23.718848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.296 [2024-12-13 06:45:23.718987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.296 [2024-12-13 06:45:23.720406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.296 [2024-12-13 06:45:23.720427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.296 06:45:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.296 06:45:23 -- common/autotest_common.sh@862 -- # return 0 00:06:19.296 06:45:23 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:19.296 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.296 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.296 POWER: Env isn't set yet! 00:06:19.296 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:19.296 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:19.296 POWER: Cannot set governor of lcore 0 to userspace 00:06:19.296 POWER: Attempting to initialise PSTAT power management... 00:06:19.296 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:19.296 POWER: Cannot set governor of lcore 0 to performance 00:06:19.296 POWER: Attempting to initialise AMD PSTATE power management... 00:06:19.296 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:19.296 POWER: Cannot set governor of lcore 0 to userspace 00:06:19.296 POWER: Attempting to initialise CPPC power management... 00:06:19.296 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:19.296 POWER: Cannot set governor of lcore 0 to userspace 00:06:19.296 POWER: Attempting to initialise VM power management... 
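Every governor probe above fails for the same reason: the guest exposes no cpufreq interface, so the sysfs path named in the error text does not exist. A quick probe, using the path the error messages themselves report:

    # Show whether each CPU has a scaling governor (absent inside this VM).
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      gov="$cpu/cpufreq/scaling_governor"
      if [[ -r "$gov" ]]; then
        echo "$cpu: $(cat "$gov")"
      else
        echo "$cpu: no cpufreq support"
      fi
    done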
00:06:19.296 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:19.296 POWER: Unable to set Power Management Environment for lcore 0 00:06:19.296 [2024-12-13 06:45:23.777302] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:19.296 [2024-12-13 06:45:23.777317] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:19.297 [2024-12-13 06:45:23.777329] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:19.297 [2024-12-13 06:45:23.777343] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:19.297 [2024-12-13 06:45:23.777370] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:19.297 [2024-12-13 06:45:23.777379] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:19.297 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.297 06:45:23 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:19.297 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.297 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 [2024-12-13 06:45:23.833021] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.555 06:45:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.555 06:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 ************************************ 00:06:19.555 START TEST scheduler_create_thread 00:06:19.555 ************************************ 00:06:19.555 06:45:23 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 2 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 3 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 4 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 5 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 6 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 7 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 8 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 9 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 10 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:19.555 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.555 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.555 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.555 06:45:23 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.556 06:45:23 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.556 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.556 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.556 06:45:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.556 06:45:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.556 06:45:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.556 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.931 06:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.931 06:45:25 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.931 06:45:25 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.931 06:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.931 06:45:25 -- common/autotest_common.sh@10 -- # set +x 00:06:22.307 ************************************ 00:06:22.307 END TEST scheduler_create_thread 00:06:22.307 ************************************ 00:06:22.307 06:45:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.307 00:06:22.307 real 0m2.612s 00:06:22.307 user 0m0.019s 00:06:22.307 sys 0m0.004s 00:06:22.307 06:45:26 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.307 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:06:22.307 06:45:26 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.307 06:45:26 -- scheduler/scheduler.sh@46 -- # killprocess 66927 00:06:22.307 06:45:26 -- common/autotest_common.sh@936 -- # '[' -z 66927 ']' 00:06:22.307 06:45:26 -- common/autotest_common.sh@940 -- # kill -0 66927 00:06:22.307 06:45:26 -- common/autotest_common.sh@941 -- # uname 00:06:22.307 06:45:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.307 06:45:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66927 00:06:22.307 killing process with pid 66927 00:06:22.307 06:45:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:22.307 06:45:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:22.307 06:45:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66927' 00:06:22.307 06:45:26 -- common/autotest_common.sh@955 -- # kill 66927 00:06:22.307 06:45:26 -- common/autotest_common.sh@960 -- # wait 66927 00:06:22.566 [2024-12-13 06:45:26.935917] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:22.566 ************************************ 00:06:22.566 END TEST event_scheduler 00:06:22.566 ************************************ 00:06:22.566 00:06:22.566 real 0m3.784s 00:06:22.566 user 0m5.605s 00:06:22.566 sys 0m0.274s 00:06:22.566 06:45:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.566 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.825 06:45:27 -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.825 06:45:27 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.825 06:45:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.825 06:45:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.825 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.825 ************************************ 00:06:22.825 START TEST app_repeat 00:06:22.825 ************************************ 00:06:22.825 06:45:27 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:22.825 06:45:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.825 06:45:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.825 06:45:27 -- event/event.sh@13 -- # local nbd_list 00:06:22.825 06:45:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.825 06:45:27 -- event/event.sh@14 -- # local bdev_list 00:06:22.825 06:45:27 -- event/event.sh@15 -- # local repeat_times=4 00:06:22.825 06:45:27 -- event/event.sh@17 -- # modprobe nbd 00:06:22.826 06:45:27 -- event/event.sh@19 -- # repeat_pid=67019 00:06:22.826 06:45:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.826 06:45:27 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.826 Process app_repeat pid: 67019 00:06:22.826 06:45:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 67019' 00:06:22.826 spdk_app_start Round 0 00:06:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
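Once app_repeat is listening on /var/tmp/spdk-nbd.sock, the test drives it entirely over RPC; the two bdev_malloc_create calls traced below each create a 64 MiB malloc bdev with a 4096-byte block size, e.g.:

    # Same call the test issues (arguments: size in MiB, block size in bytes).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # prints the bdev name, e.g. Malloc0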
00:06:22.826 06:45:27 -- event/event.sh@23 -- # for i in {0..2} 00:06:22.826 06:45:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.826 06:45:27 -- event/event.sh@25 -- # waitforlisten 67019 /var/tmp/spdk-nbd.sock 00:06:22.826 06:45:27 -- common/autotest_common.sh@829 -- # '[' -z 67019 ']' 00:06:22.826 06:45:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.826 06:45:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.826 06:45:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.826 06:45:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.826 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.826 [2024-12-13 06:45:27.160259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.826 [2024-12-13 06:45:27.160909] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67019 ] 00:06:22.826 [2024-12-13 06:45:27.293120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.826 [2024-12-13 06:45:27.327066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.826 [2024-12-13 06:45:27.327076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.085 06:45:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.085 06:45:27 -- common/autotest_common.sh@862 -- # return 0 00:06:23.085 06:45:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.344 Malloc0 00:06:23.344 06:45:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.603 Malloc1 00:06:23.603 06:45:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@12 -- # local i 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.603 06:45:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.862 /dev/nbd0 00:06:23.862 06:45:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.862 06:45:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
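The waitfornbd helper traced below reduces to two steps: wait until the device shows up in /proc/partitions, then prove it answers I/O with a single direct-IO read. A hedged sketch (the retry delay and scratch path are assumptions; the real helper lives in autotest_common.sh):

    waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device registered?
        sleep 0.1                                          # assumed delay
      done
      # Read one 4096-byte block with O_DIRECT to confirm the device serves I/O.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
        && rm -f /tmp/nbdtest
    }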
00:06:23.862 06:45:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.862 06:45:28 -- common/autotest_common.sh@867 -- # local i 00:06:23.862 06:45:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.862 06:45:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.862 06:45:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.862 06:45:28 -- common/autotest_common.sh@871 -- # break 00:06:23.862 06:45:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.862 06:45:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.862 06:45:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.862 1+0 records in 00:06:23.862 1+0 records out 00:06:23.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319404 s, 12.8 MB/s 00:06:23.862 06:45:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.862 06:45:28 -- common/autotest_common.sh@884 -- # size=4096 00:06:23.862 06:45:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.862 06:45:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.862 06:45:28 -- common/autotest_common.sh@887 -- # return 0 00:06:23.862 06:45:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.862 06:45:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.862 06:45:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.121 /dev/nbd1 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.121 06:45:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:24.121 06:45:28 -- common/autotest_common.sh@867 -- # local i 00:06:24.121 06:45:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.121 06:45:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.121 06:45:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:24.121 06:45:28 -- common/autotest_common.sh@871 -- # break 00:06:24.121 06:45:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.121 06:45:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.121 06:45:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.121 1+0 records in 00:06:24.121 1+0 records out 00:06:24.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300764 s, 13.6 MB/s 00:06:24.121 06:45:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.121 06:45:28 -- common/autotest_common.sh@884 -- # size=4096 00:06:24.121 06:45:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.121 06:45:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.121 06:45:28 -- common/autotest_common.sh@887 -- # return 0 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.121 06:45:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.381 { 00:06:24.381 "nbd_device": "/dev/nbd0", 00:06:24.381 "bdev_name": "Malloc0" 00:06:24.381 }, 00:06:24.381 { 00:06:24.381 "nbd_device": "/dev/nbd1", 00:06:24.381 "bdev_name": "Malloc1" 00:06:24.381 } 00:06:24.381 ]' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.381 { 00:06:24.381 "nbd_device": "/dev/nbd0", 00:06:24.381 "bdev_name": "Malloc0" 00:06:24.381 }, 00:06:24.381 { 00:06:24.381 "nbd_device": "/dev/nbd1", 00:06:24.381 "bdev_name": "Malloc1" 00:06:24.381 } 00:06:24.381 ]' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.381 /dev/nbd1' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.381 /dev/nbd1' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.381 256+0 records in 00:06:24.381 256+0 records out 00:06:24.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00950144 s, 110 MB/s 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.381 256+0 records in 00:06:24.381 256+0 records out 00:06:24.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243784 s, 43.0 MB/s 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.381 256+0 records in 00:06:24.381 256+0 records out 00:06:24.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262166 s, 40.0 MB/s 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.381 
06:45:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@51 -- # local i 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.381 06:45:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@41 -- # break 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.640 06:45:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@41 -- # break 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.899 06:45:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@65 -- # true 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.502 06:45:29 -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.502 06:45:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.772 06:45:30 -- event/event.sh@35 -- # 
sleep 3 00:06:25.772 [2024-12-13 06:45:30.172573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.772 [2024-12-13 06:45:30.202283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.772 [2024-12-13 06:45:30.202294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.772 [2024-12-13 06:45:30.232612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.772 [2024-12-13 06:45:30.232917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.063 06:45:33 -- event/event.sh@23 -- # for i in {0..2} 00:06:29.063 06:45:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:29.063 spdk_app_start Round 1 00:06:29.063 06:45:33 -- event/event.sh@25 -- # waitforlisten 67019 /var/tmp/spdk-nbd.sock 00:06:29.063 06:45:33 -- common/autotest_common.sh@829 -- # '[' -z 67019 ']' 00:06:29.063 06:45:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.063 06:45:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.063 06:45:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.063 06:45:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.063 06:45:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.063 06:45:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.063 06:45:33 -- common/autotest_common.sh@862 -- # return 0 00:06:29.063 06:45:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.063 Malloc0 00:06:29.063 06:45:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.324 Malloc1 00:06:29.324 06:45:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@12 -- # local i 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.324 06:45:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.583 /dev/nbd0 00:06:29.583 06:45:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.583 06:45:34 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.583 06:45:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.583 06:45:34 -- common/autotest_common.sh@867 -- # local i 00:06:29.583 06:45:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.583 06:45:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.583 06:45:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.583 06:45:34 -- common/autotest_common.sh@871 -- # break 00:06:29.583 06:45:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.583 06:45:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.583 06:45:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.583 1+0 records in 00:06:29.583 1+0 records out 00:06:29.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248811 s, 16.5 MB/s 00:06:29.583 06:45:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.583 06:45:34 -- common/autotest_common.sh@884 -- # size=4096 00:06:29.583 06:45:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.583 06:45:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.583 06:45:34 -- common/autotest_common.sh@887 -- # return 0 00:06:29.583 06:45:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.583 06:45:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.583 06:45:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.843 /dev/nbd1 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.843 06:45:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:29.843 06:45:34 -- common/autotest_common.sh@867 -- # local i 00:06:29.843 06:45:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.843 06:45:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.843 06:45:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:29.843 06:45:34 -- common/autotest_common.sh@871 -- # break 00:06:29.843 06:45:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.843 06:45:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.843 06:45:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.843 1+0 records in 00:06:29.843 1+0 records out 00:06:29.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171525 s, 23.9 MB/s 00:06:29.843 06:45:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.843 06:45:34 -- common/autotest_common.sh@884 -- # size=4096 00:06:29.843 06:45:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.843 06:45:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.843 06:45:34 -- common/autotest_common.sh@887 -- # return 0 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.843 06:45:34 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.411 { 00:06:30.411 "nbd_device": "/dev/nbd0", 00:06:30.411 "bdev_name": "Malloc0" 00:06:30.411 }, 00:06:30.411 { 00:06:30.411 "nbd_device": "/dev/nbd1", 00:06:30.411 "bdev_name": "Malloc1" 00:06:30.411 } 00:06:30.411 ]' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.411 { 00:06:30.411 "nbd_device": "/dev/nbd0", 00:06:30.411 "bdev_name": "Malloc0" 00:06:30.411 }, 00:06:30.411 { 00:06:30.411 "nbd_device": "/dev/nbd1", 00:06:30.411 "bdev_name": "Malloc1" 00:06:30.411 } 00:06:30.411 ]' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.411 /dev/nbd1' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.411 /dev/nbd1' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.411 256+0 records in 00:06:30.411 256+0 records out 00:06:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108612 s, 96.5 MB/s 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.411 256+0 records in 00:06:30.411 256+0 records out 00:06:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248029 s, 42.3 MB/s 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.411 256+0 records in 00:06:30.411 256+0 records out 00:06:30.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273285 s, 38.4 MB/s 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@51 -- # local i 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.411 06:45:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@41 -- # break 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.670 06:45:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@41 -- # break 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.927 06:45:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@65 -- # true 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.186 06:45:35 -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.186 06:45:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
spdk_kill_instance SIGTERM 00:06:31.445 06:45:35 -- event/event.sh@35 -- # sleep 3 00:06:31.704 [2024-12-13 06:45:35.979111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.704 [2024-12-13 06:45:36.010980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.704 [2024-12-13 06:45:36.010990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.704 [2024-12-13 06:45:36.039710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.704 [2024-12-13 06:45:36.039775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.993 spdk_app_start Round 2 00:06:34.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.993 06:45:38 -- event/event.sh@23 -- # for i in {0..2} 00:06:34.993 06:45:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:34.993 06:45:38 -- event/event.sh@25 -- # waitforlisten 67019 /var/tmp/spdk-nbd.sock 00:06:34.993 06:45:38 -- common/autotest_common.sh@829 -- # '[' -z 67019 ']' 00:06:34.993 06:45:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.993 06:45:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.993 06:45:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.993 06:45:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.993 06:45:38 -- common/autotest_common.sh@10 -- # set +x 00:06:34.993 06:45:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.993 06:45:39 -- common/autotest_common.sh@862 -- # return 0 00:06:34.993 06:45:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.993 Malloc0 00:06:34.993 06:45:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.252 Malloc1 00:06:35.252 06:45:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@12 -- # local i 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.252 06:45:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.511 /dev/nbd0 00:06:35.511 06:45:39 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.511 06:45:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.511 06:45:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.511 06:45:39 -- common/autotest_common.sh@867 -- # local i 00:06:35.511 06:45:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.511 06:45:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.511 06:45:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.511 06:45:39 -- common/autotest_common.sh@871 -- # break 00:06:35.511 06:45:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.511 06:45:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.511 06:45:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.511 1+0 records in 00:06:35.511 1+0 records out 00:06:35.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554672 s, 7.4 MB/s 00:06:35.511 06:45:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.511 06:45:39 -- common/autotest_common.sh@884 -- # size=4096 00:06:35.511 06:45:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.511 06:45:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.511 06:45:39 -- common/autotest_common.sh@887 -- # return 0 00:06:35.511 06:45:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.511 06:45:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.511 06:45:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.770 /dev/nbd1 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.770 06:45:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:35.770 06:45:40 -- common/autotest_common.sh@867 -- # local i 00:06:35.770 06:45:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.770 06:45:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.770 06:45:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:35.770 06:45:40 -- common/autotest_common.sh@871 -- # break 00:06:35.770 06:45:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.770 06:45:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.770 06:45:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.770 1+0 records in 00:06:35.770 1+0 records out 00:06:35.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615511 s, 6.7 MB/s 00:06:35.770 06:45:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.770 06:45:40 -- common/autotest_common.sh@884 -- # size=4096 00:06:35.770 06:45:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.770 06:45:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.770 06:45:40 -- common/autotest_common.sh@887 -- # return 0 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.770 06:45:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.770 
06:45:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.029 06:45:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.029 { 00:06:36.029 "nbd_device": "/dev/nbd0", 00:06:36.029 "bdev_name": "Malloc0" 00:06:36.029 }, 00:06:36.029 { 00:06:36.029 "nbd_device": "/dev/nbd1", 00:06:36.029 "bdev_name": "Malloc1" 00:06:36.029 } 00:06:36.029 ]' 00:06:36.029 06:45:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.029 { 00:06:36.029 "nbd_device": "/dev/nbd0", 00:06:36.029 "bdev_name": "Malloc0" 00:06:36.029 }, 00:06:36.029 { 00:06:36.029 "nbd_device": "/dev/nbd1", 00:06:36.029 "bdev_name": "Malloc1" 00:06:36.029 } 00:06:36.029 ]' 00:06:36.029 06:45:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.288 /dev/nbd1' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.288 /dev/nbd1' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.288 256+0 records in 00:06:36.288 256+0 records out 00:06:36.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105088 s, 99.8 MB/s 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.288 256+0 records in 00:06:36.288 256+0 records out 00:06:36.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268009 s, 39.1 MB/s 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.288 256+0 records in 00:06:36.288 256+0 records out 00:06:36.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281413 s, 37.3 MB/s 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.288 06:45:40 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.288 06:45:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.289 06:45:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.289 06:45:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.289 06:45:40 -- bdev/nbd_common.sh@51 -- # local i 00:06:36.289 06:45:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.289 06:45:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@41 -- # break 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.548 06:45:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@41 -- # break 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.807 06:45:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@65 -- # true 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.066 06:45:41 -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.066 06:45:41 -- event/event.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.325 06:45:41 -- event/event.sh@35 -- # sleep 3 00:06:37.325 [2024-12-13 06:45:41.837597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.584 [2024-12-13 06:45:41.869408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.584 [2024-12-13 06:45:41.869419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.584 [2024-12-13 06:45:41.897940] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.584 [2024-12-13 06:45:41.897996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.872 06:45:44 -- event/event.sh@38 -- # waitforlisten 67019 /var/tmp/spdk-nbd.sock 00:06:40.872 06:45:44 -- common/autotest_common.sh@829 -- # '[' -z 67019 ']' 00:06:40.872 06:45:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.872 06:45:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.872 06:45:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.872 06:45:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.872 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:06:40.872 06:45:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.872 06:45:45 -- common/autotest_common.sh@862 -- # return 0 00:06:40.872 06:45:45 -- event/event.sh@39 -- # killprocess 67019 00:06:40.872 06:45:45 -- common/autotest_common.sh@936 -- # '[' -z 67019 ']' 00:06:40.872 06:45:45 -- common/autotest_common.sh@940 -- # kill -0 67019 00:06:40.872 06:45:45 -- common/autotest_common.sh@941 -- # uname 00:06:40.872 06:45:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.872 06:45:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67019 00:06:40.872 killing process with pid 67019 00:06:40.872 06:45:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.872 06:45:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.872 06:45:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67019' 00:06:40.872 06:45:45 -- common/autotest_common.sh@955 -- # kill 67019 00:06:40.872 06:45:45 -- common/autotest_common.sh@960 -- # wait 67019 00:06:40.872 spdk_app_start is called in Round 0. 00:06:40.872 Shutdown signal received, stop current app iteration 00:06:40.872 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:40.872 spdk_app_start is called in Round 1. 00:06:40.872 Shutdown signal received, stop current app iteration 00:06:40.872 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:40.872 spdk_app_start is called in Round 2. 00:06:40.872 Shutdown signal received, stop current app iteration 00:06:40.872 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:40.872 spdk_app_start is called in Round 3. 
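Round 0 above, and Rounds 1 and 2 after it, all follow one fixed shape; the summary lines continuing below confirm a final Round 3 that is shut down immediately. A compressed reconstruction of the driving loop in event.sh, with $repeat_pid standing in for the app_repeat process (67019 in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # the app restarts between rounds
        $rpc bdev_malloc_create 64 4096                      # Malloc0
        $rpc bdev_malloc_create 64 4096                      # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                      # ends this round's app
        sleep 3                                              # give the next round time to come up
    done

nbd_rpc_data_verify is the block traced in each round: export both malloc bdevs over NBD, dd 1 MiB of urandom through each device, cmp it back, then detach the disks and check that nbd_get_disks returns an empty list.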
00:06:40.872 Shutdown signal received, stop current app iteration 00:06:40.872 ************************************ 00:06:40.872 END TEST app_repeat 00:06:40.872 ************************************ 00:06:40.872 06:45:45 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:40.872 06:45:45 -- event/event.sh@42 -- # return 0 00:06:40.872 00:06:40.872 real 0m18.030s 00:06:40.872 user 0m41.218s 00:06:40.872 sys 0m2.386s 00:06:40.872 06:45:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.872 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.872 06:45:45 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:40.872 06:45:45 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:40.872 06:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.872 06:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.872 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.872 ************************************ 00:06:40.872 START TEST cpu_locks 00:06:40.872 ************************************ 00:06:40.872 06:45:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:40.872 * Looking for test storage... 00:06:40.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:40.872 06:45:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.872 06:45:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.872 06:45:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.872 06:45:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.872 06:45:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.872 06:45:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.872 06:45:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.872 06:45:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.872 06:45:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.872 06:45:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.872 06:45:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.872 06:45:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.872 06:45:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.872 06:45:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.872 06:45:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.872 06:45:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.872 06:45:45 -- scripts/common.sh@344 -- # : 1 00:06:40.872 06:45:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.872 06:45:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.872 06:45:45 -- scripts/common.sh@364 -- # decimal 1 00:06:40.872 06:45:45 -- scripts/common.sh@352 -- # local d=1 00:06:40.872 06:45:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.872 06:45:45 -- scripts/common.sh@354 -- # echo 1 00:06:40.872 06:45:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:41.131 06:45:45 -- scripts/common.sh@365 -- # decimal 2 00:06:41.131 06:45:45 -- scripts/common.sh@352 -- # local d=2 00:06:41.131 06:45:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.131 06:45:45 -- scripts/common.sh@354 -- # echo 2 00:06:41.131 06:45:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:41.131 06:45:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:41.131 06:45:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:41.131 06:45:45 -- scripts/common.sh@367 -- # return 0 00:06:41.131 06:45:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.131 06:45:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:41.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.131 --rc genhtml_branch_coverage=1 00:06:41.131 --rc genhtml_function_coverage=1 00:06:41.131 --rc genhtml_legend=1 00:06:41.131 --rc geninfo_all_blocks=1 00:06:41.131 --rc geninfo_unexecuted_blocks=1 00:06:41.131 00:06:41.131 ' 00:06:41.131 06:45:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:41.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.131 --rc genhtml_branch_coverage=1 00:06:41.131 --rc genhtml_function_coverage=1 00:06:41.131 --rc genhtml_legend=1 00:06:41.131 --rc geninfo_all_blocks=1 00:06:41.131 --rc geninfo_unexecuted_blocks=1 00:06:41.131 00:06:41.131 ' 00:06:41.131 06:45:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:41.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.131 --rc genhtml_branch_coverage=1 00:06:41.131 --rc genhtml_function_coverage=1 00:06:41.131 --rc genhtml_legend=1 00:06:41.131 --rc geninfo_all_blocks=1 00:06:41.131 --rc geninfo_unexecuted_blocks=1 00:06:41.131 00:06:41.131 ' 00:06:41.131 06:45:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:41.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.131 --rc genhtml_branch_coverage=1 00:06:41.131 --rc genhtml_function_coverage=1 00:06:41.131 --rc genhtml_legend=1 00:06:41.131 --rc geninfo_all_blocks=1 00:06:41.131 --rc geninfo_unexecuted_blocks=1 00:06:41.131 00:06:41.131 ' 00:06:41.131 06:45:45 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.131 06:45:45 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.131 06:45:45 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.131 06:45:45 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.131 06:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.131 06:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.131 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.131 ************************************ 00:06:41.131 START TEST default_locks 00:06:41.131 ************************************ 00:06:41.131 06:45:45 -- common/autotest_common.sh@1114 -- # default_locks 00:06:41.131 06:45:45 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67446 00:06:41.131 06:45:45 -- event/cpu_locks.sh@47 -- # waitforlisten 67446 00:06:41.131 06:45:45 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:41.131 06:45:45 -- common/autotest_common.sh@829 -- # '[' -z 67446 ']' 00:06:41.131 06:45:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.131 06:45:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.131 06:45:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.131 06:45:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.131 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.132 [2024-12-13 06:45:45.457819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.132 [2024-12-13 06:45:45.458070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67446 ] 00:06:41.132 [2024-12-13 06:45:45.589248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.132 [2024-12-13 06:45:45.623001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.132 [2024-12-13 06:45:45.623138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.069 06:45:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.069 06:45:46 -- common/autotest_common.sh@862 -- # return 0 00:06:42.069 06:45:46 -- event/cpu_locks.sh@49 -- # locks_exist 67446 00:06:42.069 06:45:46 -- event/cpu_locks.sh@22 -- # lslocks -p 67446 00:06:42.069 06:45:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.328 06:45:46 -- event/cpu_locks.sh@50 -- # killprocess 67446 00:06:42.328 06:45:46 -- common/autotest_common.sh@936 -- # '[' -z 67446 ']' 00:06:42.328 06:45:46 -- common/autotest_common.sh@940 -- # kill -0 67446 00:06:42.328 06:45:46 -- common/autotest_common.sh@941 -- # uname 00:06:42.328 06:45:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.328 06:45:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67446 00:06:42.328 killing process with pid 67446 00:06:42.328 06:45:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.328 06:45:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.328 06:45:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67446' 00:06:42.328 06:45:46 -- common/autotest_common.sh@955 -- # kill 67446 00:06:42.328 06:45:46 -- common/autotest_common.sh@960 -- # wait 67446 00:06:42.588 06:45:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67446 00:06:42.588 06:45:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.588 06:45:46 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67446 00:06:42.588 06:45:46 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.588 06:45:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.588 06:45:46 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
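Two helpers traced just above are worth spelling out before the negative waitforlisten check resolves below. locks_exist asserts that the target holds its CPU-core lock (the spdk_cpu_lock name comes straight from the lslocks output being grepped), and killprocess tears the target down and reaps it. Hedged sketches matching the traced commands, with the sudo-wrapper branch omitted:

    locks_exist() {
        # one advisory file lock per claimed core shows up in lslocks
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1
        if [ "$(uname)" = Linux ]; then
            # SPDK targets report their main thread as reactor_0
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"   # reap so a reused pid cannot confuse later checks
        fi
    }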
00:06:42.588 ERROR: process (pid: 67446) is no longer running 00:06:42.588 06:45:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.588 06:45:46 -- common/autotest_common.sh@653 -- # waitforlisten 67446 00:06:42.588 06:45:46 -- common/autotest_common.sh@829 -- # '[' -z 67446 ']' 00:06:42.588 06:45:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.588 06:45:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.588 06:45:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.588 06:45:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.588 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67446) - No such process 00:06:42.588 06:45:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.588 06:45:46 -- common/autotest_common.sh@862 -- # return 1 00:06:42.588 06:45:46 -- common/autotest_common.sh@653 -- # es=1 00:06:42.588 06:45:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.588 06:45:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.588 06:45:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.588 06:45:46 -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.588 06:45:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.588 06:45:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.588 06:45:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.588 00:06:42.588 real 0m1.513s 00:06:42.588 user 0m1.703s 00:06:42.588 sys 0m0.368s 00:06:42.588 06:45:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.588 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.588 ************************************ 00:06:42.588 END TEST default_locks 00:06:42.588 ************************************ 00:06:42.588 06:45:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.588 06:45:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.588 06:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.588 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.588 ************************************ 00:06:42.588 START TEST default_locks_via_rpc 00:06:42.588 ************************************ 00:06:42.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.588 06:45:46 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:42.588 06:45:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67492 00:06:42.588 06:45:46 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.588 06:45:46 -- event/cpu_locks.sh@63 -- # waitforlisten 67492 00:06:42.588 06:45:46 -- common/autotest_common.sh@829 -- # '[' -z 67492 ']' 00:06:42.588 06:45:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.588 06:45:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.588 06:45:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.588 06:45:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.588 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.588 [2024-12-13 06:45:47.027455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
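The ERROR and "No such process" lines above are the expected outcome: default_locks finishes by asserting that waitforlisten on the killed pid fails. That assertion runs under a NOT wrapper, reconstructed here from the es bookkeeping visible in the trace; the handling of signal exits (es > 128) is simplified relative to the real helper:

    NOT() {
        local es=0
        "$@" || es=$?
        # deaths by signal land above 128; assumed normalization step
        if (( es > 128 )); then
            es=$(( es & ~128 ))
        fi
        # NOT succeeds if and only if the wrapped command failed
        (( !es == 0 ))
    }

Invoked as NOT waitforlisten 67446, the wrapper converts the expected lookup failure into a passing step, which is why the run proceeds straight to its timing summary.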
00:06:42.588 [2024-12-13 06:45:47.027747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67492 ] 00:06:42.847 [2024-12-13 06:45:47.162820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.847 [2024-12-13 06:45:47.198244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.847 [2024-12-13 06:45:47.198718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.785 06:45:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.785 06:45:47 -- common/autotest_common.sh@862 -- # return 0 00:06:43.785 06:45:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.785 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.785 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.785 06:45:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.785 06:45:48 -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.785 06:45:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.785 06:45:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.785 06:45:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.785 06:45:48 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.785 06:45:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.785 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.785 06:45:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.785 06:45:48 -- event/cpu_locks.sh@71 -- # locks_exist 67492 00:06:43.785 06:45:48 -- event/cpu_locks.sh@22 -- # lslocks -p 67492 00:06:43.785 06:45:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.044 06:45:48 -- event/cpu_locks.sh@73 -- # killprocess 67492 00:06:44.044 06:45:48 -- common/autotest_common.sh@936 -- # '[' -z 67492 ']' 00:06:44.044 06:45:48 -- common/autotest_common.sh@940 -- # kill -0 67492 00:06:44.044 06:45:48 -- common/autotest_common.sh@941 -- # uname 00:06:44.044 06:45:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.044 06:45:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67492 00:06:44.044 killing process with pid 67492 00:06:44.044 06:45:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.044 06:45:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.044 06:45:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67492' 00:06:44.044 06:45:48 -- common/autotest_common.sh@955 -- # kill 67492 00:06:44.044 06:45:48 -- common/autotest_common.sh@960 -- # wait 67492 00:06:44.044 00:06:44.044 real 0m1.577s 00:06:44.044 user 0m1.811s 00:06:44.044 sys 0m0.386s 00:06:44.044 06:45:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.044 ************************************ 00:06:44.044 END TEST default_locks_via_rpc 00:06:44.044 ************************************ 00:06:44.044 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.303 06:45:48 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.303 06:45:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.303 06:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.303 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.303 
************************************ 00:06:44.304 START TEST non_locking_app_on_locked_coremask 00:06:44.304 ************************************ 00:06:44.304 06:45:48 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:44.304 06:45:48 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67538 00:06:44.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.304 06:45:48 -- event/cpu_locks.sh@81 -- # waitforlisten 67538 /var/tmp/spdk.sock 00:06:44.304 06:45:48 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.304 06:45:48 -- common/autotest_common.sh@829 -- # '[' -z 67538 ']' 00:06:44.304 06:45:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.304 06:45:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.304 06:45:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.304 06:45:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.304 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.304 [2024-12-13 06:45:48.655950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.304 [2024-12-13 06:45:48.656049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67538 ] 00:06:44.304 [2024-12-13 06:45:48.795951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.563 [2024-12-13 06:45:48.828287] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.563 [2024-12-13 06:45:48.828530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.131 06:45:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.131 06:45:49 -- common/autotest_common.sh@862 -- # return 0 00:06:45.131 06:45:49 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:45.131 06:45:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67554 00:06:45.131 06:45:49 -- event/cpu_locks.sh@85 -- # waitforlisten 67554 /var/tmp/spdk2.sock 00:06:45.131 06:45:49 -- common/autotest_common.sh@829 -- # '[' -z 67554 ']' 00:06:45.131 06:45:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.131 06:45:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.131 06:45:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.131 06:45:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.131 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:06:45.390 [2024-12-13 06:45:49.664937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.390 [2024-12-13 06:45:49.665639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67554 ] 00:06:45.390 [2024-12-13 06:45:49.798862] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
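The "CPU core locks deactivated" notice above is the point of this test: a first target already owns core 0 and its lock, and a second target requests the same core mask but passes --disable-cpumask-locks, so it must come up anyway. A compressed sketch of the arrangement; backgrounding with & and $! is a simplification of how the harness actually tracks pids (67538 and 67554 in this run):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                                               # first instance takes the core-0 lock
    waitforlisten $! /var/tmp/spdk.sock
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    waitforlisten $! /var/tmp/spdk2.sock                        # starts despite the held lock
    locks_exist 67538                                           # only the locking instance shows spdk_cpu_lock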
00:06:45.390 [2024-12-13 06:45:49.798912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.390 [2024-12-13 06:45:49.861791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.390 [2024-12-13 06:45:49.861947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.327 06:45:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.327 06:45:50 -- common/autotest_common.sh@862 -- # return 0 00:06:46.327 06:45:50 -- event/cpu_locks.sh@87 -- # locks_exist 67538 00:06:46.327 06:45:50 -- event/cpu_locks.sh@22 -- # lslocks -p 67538 00:06:46.327 06:45:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.914 06:45:51 -- event/cpu_locks.sh@89 -- # killprocess 67538 00:06:46.914 06:45:51 -- common/autotest_common.sh@936 -- # '[' -z 67538 ']' 00:06:46.914 06:45:51 -- common/autotest_common.sh@940 -- # kill -0 67538 00:06:46.914 06:45:51 -- common/autotest_common.sh@941 -- # uname 00:06:46.914 06:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.914 06:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67538 00:06:46.914 killing process with pid 67538 00:06:46.914 06:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.914 06:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.914 06:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67538' 00:06:46.914 06:45:51 -- common/autotest_common.sh@955 -- # kill 67538 00:06:46.914 06:45:51 -- common/autotest_common.sh@960 -- # wait 67538 00:06:47.210 06:45:51 -- event/cpu_locks.sh@90 -- # killprocess 67554 00:06:47.210 06:45:51 -- common/autotest_common.sh@936 -- # '[' -z 67554 ']' 00:06:47.210 06:45:51 -- common/autotest_common.sh@940 -- # kill -0 67554 00:06:47.210 06:45:51 -- common/autotest_common.sh@941 -- # uname 00:06:47.210 06:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.210 06:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67554 00:06:47.210 killing process with pid 67554 00:06:47.210 06:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.210 06:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.210 06:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67554' 00:06:47.210 06:45:51 -- common/autotest_common.sh@955 -- # kill 67554 00:06:47.210 06:45:51 -- common/autotest_common.sh@960 -- # wait 67554 00:06:47.478 00:06:47.478 real 0m3.295s 00:06:47.478 user 0m3.903s 00:06:47.478 sys 0m0.738s 00:06:47.478 ************************************ 00:06:47.478 END TEST non_locking_app_on_locked_coremask 00:06:47.478 ************************************ 00:06:47.478 06:45:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.478 06:45:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 06:45:51 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.478 06:45:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.478 06:45:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.478 06:45:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.478 ************************************ 00:06:47.478 START TEST locking_app_on_unlocked_coremask 00:06:47.478 ************************************ 00:06:47.478 06:45:51 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:47.478 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.478 06:45:51 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=67615 00:06:47.478 06:45:51 -- event/cpu_locks.sh@99 -- # waitforlisten 67615 /var/tmp/spdk.sock 00:06:47.478 06:45:51 -- common/autotest_common.sh@829 -- # '[' -z 67615 ']' 00:06:47.478 06:45:51 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.478 06:45:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.478 06:45:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.478 06:45:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.478 06:45:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.478 06:45:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.738 [2024-12-13 06:45:52.008178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.738 [2024-12-13 06:45:52.008503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67615 ] 00:06:47.738 [2024-12-13 06:45:52.147112] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.738 [2024-12-13 06:45:52.147306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.738 [2024-12-13 06:45:52.178364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.738 [2024-12-13 06:45:52.178521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.675 06:45:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.675 06:45:52 -- common/autotest_common.sh@862 -- # return 0 00:06:48.675 06:45:52 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.675 06:45:52 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67631 00:06:48.675 06:45:52 -- event/cpu_locks.sh@103 -- # waitforlisten 67631 /var/tmp/spdk2.sock 00:06:48.675 06:45:52 -- common/autotest_common.sh@829 -- # '[' -z 67631 ']' 00:06:48.675 06:45:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.675 06:45:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.675 06:45:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.675 06:45:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.675 06:45:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.675 [2024-12-13 06:45:53.013609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.675 [2024-12-13 06:45:53.013897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67631 ] 00:06:48.675 [2024-12-13 06:45:53.150578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.934 [2024-12-13 06:45:53.218350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.934 [2024-12-13 06:45:53.218538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.869 06:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.869 06:45:54 -- common/autotest_common.sh@862 -- # return 0 00:06:49.869 06:45:54 -- event/cpu_locks.sh@105 -- # locks_exist 67631 00:06:49.869 06:45:54 -- event/cpu_locks.sh@22 -- # lslocks -p 67631 00:06:49.869 06:45:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.437 06:45:54 -- event/cpu_locks.sh@107 -- # killprocess 67615 00:06:50.437 06:45:54 -- common/autotest_common.sh@936 -- # '[' -z 67615 ']' 00:06:50.437 06:45:54 -- common/autotest_common.sh@940 -- # kill -0 67615 00:06:50.437 06:45:54 -- common/autotest_common.sh@941 -- # uname 00:06:50.437 06:45:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.437 06:45:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67615 00:06:50.437 killing process with pid 67615 00:06:50.437 06:45:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.437 06:45:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.437 06:45:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67615' 00:06:50.437 06:45:54 -- common/autotest_common.sh@955 -- # kill 67615 00:06:50.437 06:45:54 -- common/autotest_common.sh@960 -- # wait 67615 00:06:51.013 06:45:55 -- event/cpu_locks.sh@108 -- # killprocess 67631 00:06:51.013 06:45:55 -- common/autotest_common.sh@936 -- # '[' -z 67631 ']' 00:06:51.013 06:45:55 -- common/autotest_common.sh@940 -- # kill -0 67631 00:06:51.013 06:45:55 -- common/autotest_common.sh@941 -- # uname 00:06:51.013 06:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.013 06:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67631 00:06:51.013 killing process with pid 67631 00:06:51.013 06:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.013 06:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.013 06:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67631' 00:06:51.013 06:45:55 -- common/autotest_common.sh@955 -- # kill 67631 00:06:51.013 06:45:55 -- common/autotest_common.sh@960 -- # wait 67631 00:06:51.272 ************************************ 00:06:51.272 END TEST locking_app_on_unlocked_coremask 00:06:51.272 ************************************ 00:06:51.272 00:06:51.272 real 0m3.611s 00:06:51.272 user 0m4.290s 00:06:51.272 sys 0m0.894s 00:06:51.272 06:45:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.272 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:06:51.272 06:45:55 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.272 06:45:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.272 06:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.272 06:45:55 -- common/autotest_common.sh@10 -- # set +x 
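Every launch above is paired with waitforlisten, whose xtrace only exposes its locals (rpc_addr defaulting to /var/tmp/spdk.sock, max_retries=100) and the banner echo. A rough re-creation under the assumption that the elided body polls the RPC socket; the rpc_get_methods probe is a stand-in for illustration, not the helper's actual code:

    # Approximate waitforlisten: block until the target answers on its RPC
    # socket or the retry budget runs out. Only the locals below appear in
    # the xtrace; the polling loop is an assumption.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1   # target died early
            sleep 0.1
        done
        return 1
    }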
00:06:51.272 ************************************ 00:06:51.272 START TEST locking_app_on_locked_coremask 00:06:51.272 ************************************ 00:06:51.272 06:45:55 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:51.272 06:45:55 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67693 00:06:51.272 06:45:55 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.272 06:45:55 -- event/cpu_locks.sh@116 -- # waitforlisten 67693 /var/tmp/spdk.sock 00:06:51.272 06:45:55 -- common/autotest_common.sh@829 -- # '[' -z 67693 ']' 00:06:51.272 06:45:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.272 06:45:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.272 06:45:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.272 06:45:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.273 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:06:51.273 [2024-12-13 06:45:55.669446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.273 [2024-12-13 06:45:55.669548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67693 ] 00:06:51.531 [2024-12-13 06:45:55.807710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.531 [2024-12-13 06:45:55.839119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.531 [2024-12-13 06:45:55.839280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.467 06:45:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.467 06:45:56 -- common/autotest_common.sh@862 -- # return 0 00:06:52.467 06:45:56 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67709 00:06:52.467 06:45:56 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67709 /var/tmp/spdk2.sock 00:06:52.467 06:45:56 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.467 06:45:56 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.467 06:45:56 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67709 /var/tmp/spdk2.sock 00:06:52.467 06:45:56 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.467 06:45:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.467 06:45:56 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.467 06:45:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.467 06:45:56 -- common/autotest_common.sh@653 -- # waitforlisten 67709 /var/tmp/spdk2.sock 00:06:52.467 06:45:56 -- common/autotest_common.sh@829 -- # '[' -z 67709 ']' 00:06:52.467 06:45:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.467 06:45:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.467 06:45:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.467 06:45:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.467 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.467 [2024-12-13 06:45:56.724744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.468 [2024-12-13 06:45:56.725038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67709 ] 00:06:52.468 [2024-12-13 06:45:56.866409] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67693 has claimed it. 00:06:52.468 [2024-12-13 06:45:56.866472] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.034 ERROR: process (pid: 67709) is no longer running 00:06:53.034 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67709) - No such process 00:06:53.034 06:45:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.034 06:45:57 -- common/autotest_common.sh@862 -- # return 1 00:06:53.034 06:45:57 -- common/autotest_common.sh@653 -- # es=1 00:06:53.034 06:45:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.034 06:45:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.034 06:45:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.034 06:45:57 -- event/cpu_locks.sh@122 -- # locks_exist 67693 00:06:53.034 06:45:57 -- event/cpu_locks.sh@22 -- # lslocks -p 67693 00:06:53.034 06:45:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.293 06:45:57 -- event/cpu_locks.sh@124 -- # killprocess 67693 00:06:53.293 06:45:57 -- common/autotest_common.sh@936 -- # '[' -z 67693 ']' 00:06:53.293 06:45:57 -- common/autotest_common.sh@940 -- # kill -0 67693 00:06:53.293 06:45:57 -- common/autotest_common.sh@941 -- # uname 00:06:53.293 06:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.293 06:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67693 00:06:53.293 killing process with pid 67693 00:06:53.293 06:45:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.293 06:45:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.293 06:45:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67693' 00:06:53.293 06:45:57 -- common/autotest_common.sh@955 -- # kill 67693 00:06:53.293 06:45:57 -- common/autotest_common.sh@960 -- # wait 67693 00:06:53.552 00:06:53.552 real 0m2.327s 00:06:53.552 user 0m2.853s 00:06:53.552 sys 0m0.450s 00:06:53.552 06:45:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.552 ************************************ 00:06:53.552 END TEST locking_app_on_locked_coremask 00:06:53.552 ************************************ 00:06:53.552 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.552 06:45:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.552 06:45:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.552 06:45:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.552 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.552 ************************************ 00:06:53.552 START TEST locking_overlapped_coremask 00:06:53.552 ************************************ 00:06:53.552 06:45:57 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:53.552 06:45:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67749 00:06:53.552 06:45:57 -- event/cpu_locks.sh@133 -- # waitforlisten 67749 /var/tmp/spdk.sock 00:06:53.552 06:45:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.552 06:45:57 -- common/autotest_common.sh@829 -- # '[' -z 67749 ']' 00:06:53.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.552 06:45:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.552 06:45:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.552 06:45:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.552 06:45:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.552 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.552 [2024-12-13 06:45:58.040221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.552 [2024-12-13 06:45:58.040508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67749 ] 00:06:53.811 [2024-12-13 06:45:58.172755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.811 [2024-12-13 06:45:58.205285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.811 [2024-12-13 06:45:58.205887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.811 [2024-12-13 06:45:58.205929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.811 [2024-12-13 06:45:58.205931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.746 06:45:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.746 06:45:59 -- common/autotest_common.sh@862 -- # return 0 00:06:54.746 06:45:59 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67767 00:06:54.746 06:45:59 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.746 06:45:59 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67767 /var/tmp/spdk2.sock 00:06:54.746 06:45:59 -- common/autotest_common.sh@650 -- # local es=0 00:06:54.746 06:45:59 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67767 /var/tmp/spdk2.sock 00:06:54.746 06:45:59 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:54.746 06:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.746 06:45:59 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:54.746 06:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.746 06:45:59 -- common/autotest_common.sh@653 -- # waitforlisten 67767 /var/tmp/spdk2.sock 00:06:54.746 06:45:59 -- common/autotest_common.sh@829 -- # '[' -z 67767 ']' 00:06:54.746 06:45:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.746 06:45:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.746 06:45:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
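locking_overlapped_coremask pins the first target to -m 0x7 (cores 0, 1, 2) and the second to -m 0x1c (cores 2, 3, 4), so exactly one core is contested. The overlap can be read straight off the masks:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); the AND is the
    # contested core, which the claim error below reports as core 2.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2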
00:06:54.746 06:45:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.746 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:06:54.746 [2024-12-13 06:45:59.101607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.746 [2024-12-13 06:45:59.101727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67767 ] 00:06:54.746 [2024-12-13 06:45:59.246130] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67749 has claimed it. 00:06:54.746 [2024-12-13 06:45:59.246218] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.313 ERROR: process (pid: 67767) is no longer running 00:06:55.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67767) - No such process 00:06:55.313 06:45:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.313 06:45:59 -- common/autotest_common.sh@862 -- # return 1 00:06:55.313 06:45:59 -- common/autotest_common.sh@653 -- # es=1 00:06:55.313 06:45:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.313 06:45:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.313 06:45:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.313 06:45:59 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.313 06:45:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.313 06:45:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.313 06:45:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.313 06:45:59 -- event/cpu_locks.sh@141 -- # killprocess 67749 00:06:55.313 06:45:59 -- common/autotest_common.sh@936 -- # '[' -z 67749 ']' 00:06:55.313 06:45:59 -- common/autotest_common.sh@940 -- # kill -0 67749 00:06:55.313 06:45:59 -- common/autotest_common.sh@941 -- # uname 00:06:55.313 06:45:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.313 06:45:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67749 00:06:55.313 killing process with pid 67749 00:06:55.313 06:45:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.313 06:45:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.313 06:45:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67749' 00:06:55.313 06:45:59 -- common/autotest_common.sh@955 -- # kill 67749 00:06:55.313 06:45:59 -- common/autotest_common.sh@960 -- # wait 67749 00:06:55.572 00:06:55.572 real 0m2.048s 00:06:55.572 user 0m5.990s 00:06:55.572 sys 0m0.328s 00:06:55.572 06:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.572 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.572 ************************************ 00:06:55.572 END TEST locking_overlapped_coremask 00:06:55.572 ************************************ 00:06:55.572 06:46:00 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.572 06:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.572 06:46:00 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.572 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.831 ************************************ 00:06:55.831 START TEST locking_overlapped_coremask_via_rpc 00:06:55.831 ************************************ 00:06:55.831 06:46:00 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:55.831 06:46:00 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67807 00:06:55.831 06:46:00 -- event/cpu_locks.sh@149 -- # waitforlisten 67807 /var/tmp/spdk.sock 00:06:55.831 06:46:00 -- common/autotest_common.sh@829 -- # '[' -z 67807 ']' 00:06:55.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.831 06:46:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.831 06:46:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.831 06:46:00 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.831 06:46:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.831 06:46:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.831 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.831 [2024-12-13 06:46:00.139013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.831 [2024-12-13 06:46:00.139445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67807 ] 00:06:55.831 [2024-12-13 06:46:00.272818] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:55.832 [2024-12-13 06:46:00.273019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.832 [2024-12-13 06:46:00.305538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.832 [2024-12-13 06:46:00.306072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.832 [2024-12-13 06:46:00.306240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.832 [2024-12-13 06:46:00.306245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.768 06:46:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.768 06:46:01 -- common/autotest_common.sh@862 -- # return 0 00:06:56.768 06:46:01 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67825 00:06:56.768 06:46:01 -- event/cpu_locks.sh@153 -- # waitforlisten 67825 /var/tmp/spdk2.sock 00:06:56.768 06:46:01 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.768 06:46:01 -- common/autotest_common.sh@829 -- # '[' -z 67825 ']' 00:06:56.768 06:46:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.768 06:46:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.768 06:46:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
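Once both via_rpc targets are up, the suite verifies with check_remaining_locks that precisely the lock files for cores 0 to 2 survive, as its xtrace shows above and again further down. A condensed sketch of that comparison; the {000..002} range matches the 0x7 mask used here:

    # Condensed check_remaining_locks (cpu_locks.sh@36-38): glob the lock
    # files actually present and compare them to the expected brace range.
    check_remaining_locks() {
        local locks locks_expected
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }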
00:06:56.768 06:46:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.768 06:46:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.768 [2024-12-13 06:46:01.124866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.768 [2024-12-13 06:46:01.125165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67825 ] 00:06:56.768 [2024-12-13 06:46:01.267506] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.768 [2024-12-13 06:46:01.267562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.026 [2024-12-13 06:46:01.340737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.026 [2024-12-13 06:46:01.341020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.026 [2024-12-13 06:46:01.341166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.026 [2024-12-13 06:46:01.341167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.593 06:46:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.593 06:46:02 -- common/autotest_common.sh@862 -- # return 0 00:06:57.593 06:46:02 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.593 06:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.593 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 06:46:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.593 06:46:02 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.593 06:46:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.593 06:46:02 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.593 06:46:02 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:57.593 06:46:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.593 06:46:02 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:57.593 06:46:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.593 06:46:02 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.593 06:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.593 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 [2024-12-13 06:46:02.089536] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67807 has claimed it. 00:06:57.593 request: 00:06:57.593 { 00:06:57.593 "method": "framework_enable_cpumask_locks", 00:06:57.593 "req_id": 1 00:06:57.593 } 00:06:57.593 Got JSON-RPC error response 00:06:57.593 response: 00:06:57.593 { 00:06:57.593 "code": -32603, 00:06:57.593 "message": "Failed to claim CPU core: 2" 00:06:57.593 } 00:06:57.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
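The request/response pair above shows framework_enable_cpumask_locks refused with -32603 while pid 67807 still owns the core 2 lock. Issuing the same call by hand looks like this; the sketch assumes rpc.py exits non-zero when the target returns a JSON-RPC error, which is how rpc_cmd detects the failure:

    # Ask the second target to re-enable core locks; expected to fail while
    # the first target holds the contested lock.
    if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
            framework_enable_cpumask_locks; then
        echo "claim refused: another process owns the core lock"
    fi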
00:06:57.593 06:46:02 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.593 06:46:02 -- common/autotest_common.sh@653 -- # es=1 00:06:57.593 06:46:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.593 06:46:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.593 06:46:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.593 06:46:02 -- event/cpu_locks.sh@158 -- # waitforlisten 67807 /var/tmp/spdk.sock 00:06:57.593 06:46:02 -- common/autotest_common.sh@829 -- # '[' -z 67807 ']' 00:06:57.593 06:46:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.593 06:46:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.593 06:46:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.593 06:46:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.593 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:06:57.851 06:46:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.851 06:46:02 -- common/autotest_common.sh@862 -- # return 0 00:06:57.851 06:46:02 -- event/cpu_locks.sh@159 -- # waitforlisten 67825 /var/tmp/spdk2.sock 00:06:57.851 06:46:02 -- common/autotest_common.sh@829 -- # '[' -z 67825 ']' 00:06:57.851 06:46:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.851 06:46:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.851 06:46:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.851 06:46:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.851 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.110 06:46:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.110 06:46:02 -- common/autotest_common.sh@862 -- # return 0 00:06:58.110 06:46:02 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.110 06:46:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.110 ************************************ 00:06:58.110 END TEST locking_overlapped_coremask_via_rpc 00:06:58.110 ************************************ 00:06:58.110 06:46:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.110 06:46:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.110 00:06:58.110 real 0m2.524s 00:06:58.110 user 0m1.277s 00:06:58.110 sys 0m0.171s 00:06:58.110 06:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.110 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.369 06:46:02 -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.370 06:46:02 -- event/cpu_locks.sh@15 -- # [[ -z 67807 ]] 00:06:58.370 06:46:02 -- event/cpu_locks.sh@15 -- # killprocess 67807 00:06:58.370 06:46:02 -- common/autotest_common.sh@936 -- # '[' -z 67807 ']' 00:06:58.370 06:46:02 -- common/autotest_common.sh@940 -- # kill -0 67807 00:06:58.370 06:46:02 -- common/autotest_common.sh@941 -- # uname 00:06:58.370 06:46:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.370 06:46:02 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 67807 00:06:58.370 killing process with pid 67807 00:06:58.370 06:46:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.370 06:46:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.370 06:46:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67807' 00:06:58.370 06:46:02 -- common/autotest_common.sh@955 -- # kill 67807 00:06:58.370 06:46:02 -- common/autotest_common.sh@960 -- # wait 67807 00:06:58.629 06:46:02 -- event/cpu_locks.sh@16 -- # [[ -z 67825 ]] 00:06:58.629 06:46:02 -- event/cpu_locks.sh@16 -- # killprocess 67825 00:06:58.629 06:46:02 -- common/autotest_common.sh@936 -- # '[' -z 67825 ']' 00:06:58.629 06:46:02 -- common/autotest_common.sh@940 -- # kill -0 67825 00:06:58.629 06:46:02 -- common/autotest_common.sh@941 -- # uname 00:06:58.629 06:46:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.629 06:46:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67825 00:06:58.629 killing process with pid 67825 00:06:58.629 06:46:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:58.629 06:46:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:58.629 06:46:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67825' 00:06:58.629 06:46:02 -- common/autotest_common.sh@955 -- # kill 67825 00:06:58.629 06:46:02 -- common/autotest_common.sh@960 -- # wait 67825 00:06:58.888 06:46:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.888 06:46:03 -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.888 06:46:03 -- event/cpu_locks.sh@15 -- # [[ -z 67807 ]] 00:06:58.888 06:46:03 -- event/cpu_locks.sh@15 -- # killprocess 67807 00:06:58.888 06:46:03 -- common/autotest_common.sh@936 -- # '[' -z 67807 ']' 00:06:58.888 06:46:03 -- common/autotest_common.sh@940 -- # kill -0 67807 00:06:58.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67807) - No such process 00:06:58.888 Process with pid 67807 is not found 00:06:58.888 Process with pid 67825 is not found 00:06:58.888 06:46:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67807 is not found' 00:06:58.888 06:46:03 -- event/cpu_locks.sh@16 -- # [[ -z 67825 ]] 00:06:58.888 06:46:03 -- event/cpu_locks.sh@16 -- # killprocess 67825 00:06:58.888 06:46:03 -- common/autotest_common.sh@936 -- # '[' -z 67825 ']' 00:06:58.888 06:46:03 -- common/autotest_common.sh@940 -- # kill -0 67825 00:06:58.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67825) - No such process 00:06:58.888 06:46:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67825 is not found' 00:06:58.888 06:46:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.888 ************************************ 00:06:58.888 END TEST cpu_locks 00:06:58.888 ************************************ 00:06:58.888 00:06:58.888 real 0m17.962s 00:06:58.888 user 0m33.388s 00:06:58.888 sys 0m3.994s 00:06:58.888 06:46:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.888 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.888 ************************************ 00:06:58.888 END TEST event 00:06:58.888 ************************************ 00:06:58.888 00:06:58.888 real 0m44.015s 00:06:58.888 user 1m26.679s 00:06:58.888 sys 0m7.063s 00:06:58.888 06:46:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.888 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.888 06:46:03 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.888 06:46:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.888 06:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.888 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.888 ************************************ 00:06:58.888 START TEST thread 00:06:58.888 ************************************ 00:06:58.888 06:46:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.888 * Looking for test storage... 00:06:58.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.888 06:46:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:58.889 06:46:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:58.889 06:46:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.148 06:46:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:59.148 06:46:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:59.148 06:46:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:59.148 06:46:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:59.148 06:46:03 -- scripts/common.sh@335 -- # IFS=.-: 00:06:59.148 06:46:03 -- scripts/common.sh@335 -- # read -ra ver1 00:06:59.148 06:46:03 -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.148 06:46:03 -- scripts/common.sh@336 -- # read -ra ver2 00:06:59.148 06:46:03 -- scripts/common.sh@337 -- # local 'op=<' 00:06:59.148 06:46:03 -- scripts/common.sh@339 -- # ver1_l=2 00:06:59.148 06:46:03 -- scripts/common.sh@340 -- # ver2_l=1 00:06:59.148 06:46:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:59.148 06:46:03 -- scripts/common.sh@343 -- # case "$op" in 00:06:59.148 06:46:03 -- scripts/common.sh@344 -- # : 1 00:06:59.148 06:46:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:59.148 06:46:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.148 06:46:03 -- scripts/common.sh@364 -- # decimal 1 00:06:59.148 06:46:03 -- scripts/common.sh@352 -- # local d=1 00:06:59.148 06:46:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.148 06:46:03 -- scripts/common.sh@354 -- # echo 1 00:06:59.148 06:46:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:59.148 06:46:03 -- scripts/common.sh@365 -- # decimal 2 00:06:59.148 06:46:03 -- scripts/common.sh@352 -- # local d=2 00:06:59.148 06:46:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.148 06:46:03 -- scripts/common.sh@354 -- # echo 2 00:06:59.148 06:46:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:59.148 06:46:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:59.148 06:46:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:59.148 06:46:03 -- scripts/common.sh@367 -- # return 0 00:06:59.148 06:46:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.148 06:46:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.148 --rc genhtml_branch_coverage=1 00:06:59.148 --rc genhtml_function_coverage=1 00:06:59.148 --rc genhtml_legend=1 00:06:59.148 --rc geninfo_all_blocks=1 00:06:59.148 --rc geninfo_unexecuted_blocks=1 00:06:59.148 00:06:59.148 ' 00:06:59.148 06:46:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.148 --rc genhtml_branch_coverage=1 00:06:59.148 --rc genhtml_function_coverage=1 00:06:59.148 --rc genhtml_legend=1 00:06:59.148 --rc geninfo_all_blocks=1 00:06:59.148 --rc geninfo_unexecuted_blocks=1 00:06:59.148 00:06:59.148 ' 00:06:59.148 06:46:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.148 --rc genhtml_branch_coverage=1 00:06:59.148 --rc genhtml_function_coverage=1 00:06:59.148 --rc genhtml_legend=1 00:06:59.148 --rc geninfo_all_blocks=1 00:06:59.148 --rc geninfo_unexecuted_blocks=1 00:06:59.148 00:06:59.148 ' 00:06:59.148 06:46:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.148 --rc genhtml_branch_coverage=1 00:06:59.148 --rc genhtml_function_coverage=1 00:06:59.148 --rc genhtml_legend=1 00:06:59.148 --rc geninfo_all_blocks=1 00:06:59.148 --rc geninfo_unexecuted_blocks=1 00:06:59.148 00:06:59.148 ' 00:06:59.148 06:46:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.148 06:46:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:59.148 06:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.148 06:46:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.148 ************************************ 00:06:59.148 START TEST thread_poller_perf 00:06:59.148 ************************************ 00:06:59.148 06:46:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.148 [2024-12-13 06:46:03.452941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
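The scripts/common.sh walk above (lt 1.15 2 via cmp_versions) decides which lcov option syntax to export by comparing the installed lcov version against 2. A condensed form of the traced comparison, with the decimal() coercion folded into plain numeric tests:

    # Condensed cmp_versions: split each version on '.', '-' or ':' and
    # compare field by field, padding the shorter one with zeros.
    lt() {   # exit 0 iff $1 < $2
        local -a ver1 ver2
        local ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }

    # lt 1.15 2 succeeds on the first field (1 < 2), so the pre-2.0
    # lcov_branch_coverage/lcov_function_coverage flags above get exported.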
00:06:59.148 [2024-12-13 06:46:03.453024] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67962 ] 00:06:59.148 [2024-12-13 06:46:03.585449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.148 [2024-12-13 06:46:03.615683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.148 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:00.527 [2024-12-13T06:46:05.046Z] ====================================== 00:07:00.527 [2024-12-13T06:46:05.046Z] busy:2207153104 (cyc) 00:07:00.527 [2024-12-13T06:46:05.046Z] total_run_count: 354000 00:07:00.527 [2024-12-13T06:46:05.046Z] tsc_hz: 2200000000 (cyc) 00:07:00.527 [2024-12-13T06:46:05.046Z] ====================================== 00:07:00.527 [2024-12-13T06:46:05.046Z] poller_cost: 6234 (cyc), 2833 (nsec) 00:07:00.527 00:07:00.527 real 0m1.234s 00:07:00.527 user 0m1.090s 00:07:00.527 sys 0m0.037s 00:07:00.527 06:46:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.527 ************************************ 00:07:00.527 END TEST thread_poller_perf 00:07:00.527 ************************************ 00:07:00.527 06:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:00.527 06:46:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.527 06:46:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:00.527 06:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.527 06:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:00.527 ************************************ 00:07:00.527 START TEST thread_poller_perf 00:07:00.527 ************************************ 00:07:00.527 06:46:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.527 [2024-12-13 06:46:04.737080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.527 [2024-12-13 06:46:04.737364] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67992 ] 00:07:00.527 [2024-12-13 06:46:04.872581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.527 [2024-12-13 06:46:04.902269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.527 Running 1000 pollers for 1 seconds with 0 microseconds period. 
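The summary block above gives everything needed to recompute poller_cost: it is busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Replaying the first run's numbers:

    # poller_cost for the 1 us period run above:
    busy=2207153104 runs=354000 tsc_hz=2200000000
    echo "cyc:  $(( busy / runs ))"                        # -> 6234
    echo "nsec: $(( busy / runs * 1000000000 / tsc_hz ))"  # -> 2833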
00:07:01.472 [2024-12-13T06:46:05.991Z] ====================================== 00:07:01.472 [2024-12-13T06:46:05.991Z] busy:2202515562 (cyc) 00:07:01.472 [2024-12-13T06:46:05.991Z] total_run_count: 4870000 00:07:01.472 [2024-12-13T06:46:05.991Z] tsc_hz: 2200000000 (cyc) 00:07:01.472 [2024-12-13T06:46:05.991Z] ====================================== 00:07:01.472 [2024-12-13T06:46:05.991Z] poller_cost: 452 (cyc), 205 (nsec) 00:07:01.472 ************************************ 00:07:01.472 END TEST thread_poller_perf 00:07:01.472 ************************************ 00:07:01.472 00:07:01.472 real 0m1.230s 00:07:01.472 user 0m1.078s 00:07:01.472 sys 0m0.045s 00:07:01.472 06:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.472 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.731 06:46:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.731 00:07:01.731 real 0m2.730s 00:07:01.731 user 0m2.309s 00:07:01.731 sys 0m0.199s 00:07:01.731 ************************************ 00:07:01.731 END TEST thread 00:07:01.731 ************************************ 00:07:01.731 06:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.731 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.731 06:46:06 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.731 06:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.731 06:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.731 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.731 ************************************ 00:07:01.731 START TEST accel 00:07:01.731 ************************************ 00:07:01.731 06:46:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.731 * Looking for test storage... 00:07:01.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:01.731 06:46:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:01.731 06:46:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.731 06:46:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:01.731 06:46:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.731 06:46:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:01.731 06:46:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:01.731 06:46:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:01.731 06:46:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:01.731 06:46:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:01.731 06:46:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.731 06:46:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:01.731 06:46:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:01.731 06:46:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:01.731 06:46:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:01.731 06:46:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:01.731 06:46:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:01.731 06:46:06 -- scripts/common.sh@344 -- # : 1 00:07:01.731 06:46:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:01.731 06:46:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.731 06:46:06 -- scripts/common.sh@364 -- # decimal 1 00:07:01.731 06:46:06 -- scripts/common.sh@352 -- # local d=1 00:07:01.731 06:46:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.731 06:46:06 -- scripts/common.sh@354 -- # echo 1 00:07:01.732 06:46:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.732 06:46:06 -- scripts/common.sh@365 -- # decimal 2 00:07:01.732 06:46:06 -- scripts/common.sh@352 -- # local d=2 00:07:01.732 06:46:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.732 06:46:06 -- scripts/common.sh@354 -- # echo 2 00:07:01.732 06:46:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.732 06:46:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.732 06:46:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.732 06:46:06 -- scripts/common.sh@367 -- # return 0 00:07:01.732 06:46:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.732 06:46:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.732 --rc genhtml_branch_coverage=1 00:07:01.732 --rc genhtml_function_coverage=1 00:07:01.732 --rc genhtml_legend=1 00:07:01.732 --rc geninfo_all_blocks=1 00:07:01.732 --rc geninfo_unexecuted_blocks=1 00:07:01.732 00:07:01.732 ' 00:07:01.732 06:46:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.732 --rc genhtml_branch_coverage=1 00:07:01.732 --rc genhtml_function_coverage=1 00:07:01.732 --rc genhtml_legend=1 00:07:01.732 --rc geninfo_all_blocks=1 00:07:01.732 --rc geninfo_unexecuted_blocks=1 00:07:01.732 00:07:01.732 ' 00:07:01.732 06:46:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.732 --rc genhtml_branch_coverage=1 00:07:01.732 --rc genhtml_function_coverage=1 00:07:01.732 --rc genhtml_legend=1 00:07:01.732 --rc geninfo_all_blocks=1 00:07:01.732 --rc geninfo_unexecuted_blocks=1 00:07:01.732 00:07:01.732 ' 00:07:01.732 06:46:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.732 --rc genhtml_branch_coverage=1 00:07:01.732 --rc genhtml_function_coverage=1 00:07:01.732 --rc genhtml_legend=1 00:07:01.732 --rc geninfo_all_blocks=1 00:07:01.732 --rc geninfo_unexecuted_blocks=1 00:07:01.732 00:07:01.732 ' 00:07:01.732 06:46:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:01.732 06:46:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:01.732 06:46:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.732 06:46:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=68068 00:07:01.732 06:46:06 -- accel/accel.sh@60 -- # waitforlisten 68068 00:07:01.732 06:46:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.732 06:46:06 -- common/autotest_common.sh@829 -- # '[' -z 68068 ']' 00:07:01.732 06:46:06 -- accel/accel.sh@58 -- # build_accel_config 00:07:01.732 06:46:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.732 06:46:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.732 06:46:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.732 06:46:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:01.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.732 06:46:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.732 06:46:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.732 06:46:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.732 06:46:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.732 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:01.732 06:46:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.732 06:46:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.732 06:46:06 -- accel/accel.sh@42 -- # jq -r . 00:07:01.991 [2024-12-13 06:46:06.296943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.991 [2024-12-13 06:46:06.297842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68068 ] 00:07:01.991 [2024-12-13 06:46:06.436255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.991 [2024-12-13 06:46:06.472731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.991 [2024-12-13 06:46:06.473098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.929 06:46:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.929 06:46:07 -- common/autotest_common.sh@862 -- # return 0 00:07:02.929 06:46:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:02.929 06:46:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:02.929 06:46:07 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:02.929 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.929 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.929 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 
06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # IFS== 00:07:02.929 06:46:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:02.929 06:46:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:02.929 06:46:07 -- accel/accel.sh@67 -- # killprocess 68068 00:07:02.929 06:46:07 -- common/autotest_common.sh@936 -- # '[' -z 68068 ']' 00:07:02.929 06:46:07 -- common/autotest_common.sh@940 -- # kill -0 68068 00:07:02.929 06:46:07 -- common/autotest_common.sh@941 -- # uname 00:07:02.929 06:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.929 06:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68068 00:07:02.929 killing process with pid 68068 00:07:02.929 06:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.929 06:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.929 06:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68068' 00:07:02.929 06:46:07 -- common/autotest_common.sh@955 -- # kill 68068 00:07:02.929 06:46:07 -- common/autotest_common.sh@960 -- # wait 68068 00:07:03.188 06:46:07 -- accel/accel.sh@68 -- # trap - ERR 00:07:03.188 06:46:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:03.188 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:03.188 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.188 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 06:46:07 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:03.188 06:46:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:03.188 06:46:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.188 06:46:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.188 06:46:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.188 06:46:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.188 06:46:07 -- accel/accel.sh@42 -- # jq -r . 
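The build_accel_config trace that ends here is the harness assembling an accel JSON config and handing it to accel_perf as /dev/fd/62. A minimal bash sketch of that pattern, reconstructed from the xtrace above (the real helper lives in test/accel/accel.sh; the module fragment in the comment is a hypothetical example):

  # Join any module JSON fragments with commas, then pass the document to
  # accel_perf through process substitution (it shows up as /dev/fd/NN).
  accel_json_cfg=()   # e.g. accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
  joined=$(IFS=,; printf '%s' "${accel_json_cfg[*]}")
  config="{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [$joined]}]}"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '%s\n' "$config" | jq -r .) -t 1 -w crc32c -y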
00:07:03.188 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.188 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 06:46:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:03.188 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:03.188 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.188 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 ************************************ 00:07:03.188 START TEST accel_missing_filename 00:07:03.188 ************************************ 00:07:03.188 06:46:07 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:03.188 06:46:07 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.188 06:46:07 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:03.188 06:46:07 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:03.188 06:46:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.188 06:46:07 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:03.188 06:46:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.188 06:46:07 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:03.188 06:46:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:03.188 06:46:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.188 06:46:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.188 06:46:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.188 06:46:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.188 06:46:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.188 06:46:07 -- accel/accel.sh@42 -- # jq -r . 00:07:03.448 [2024-12-13 06:46:07.715169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.448 [2024-12-13 06:46:07.715259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68125 ] 00:07:03.448 [2024-12-13 06:46:07.853911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.448 [2024-12-13 06:46:07.885866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.448 [2024-12-13 06:46:07.915865] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.448 [2024-12-13 06:46:07.956607] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:03.707 A filename is required. 
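The NOT wrapper driving this test only succeeds when the wrapped command fails, so the case passes precisely because accel_perf aborts without an input file. A stripped-down sketch of the idea (the real helper in test/common/autotest_common.sh is more elaborate and also filters acceptable exit codes):

  # Minimal stand-in for NOT: invert the exit status of a command.
  NOT() { ! "$@"; }

  # compress requires -l <file>; with it missing, accel_perf prints
  # "A filename is required." and exits non-zero, so NOT returns 0.
  NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress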
00:07:03.707 06:46:08 -- common/autotest_common.sh@653 -- # es=234 00:07:03.707 06:46:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.707 06:46:08 -- common/autotest_common.sh@662 -- # es=106 00:07:03.707 ************************************ 00:07:03.707 END TEST accel_missing_filename 00:07:03.707 ************************************ 00:07:03.707 06:46:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.707 06:46:08 -- common/autotest_common.sh@670 -- # es=1 00:07:03.707 06:46:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.707 00:07:03.707 real 0m0.334s 00:07:03.707 user 0m0.209s 00:07:03.707 sys 0m0.069s 00:07:03.707 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.707 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:03.707 06:46:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.707 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:03.707 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.707 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:03.707 ************************************ 00:07:03.707 START TEST accel_compress_verify 00:07:03.707 ************************************ 00:07:03.707 06:46:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.707 06:46:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.707 06:46:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.707 06:46:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:03.707 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.707 06:46:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:03.707 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.707 06:46:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.707 06:46:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.707 06:46:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.707 06:46:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.707 06:46:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.707 06:46:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.707 06:46:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.707 06:46:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.707 06:46:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.707 06:46:08 -- accel/accel.sh@42 -- # jq -r . 00:07:03.707 [2024-12-13 06:46:08.096313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:03.707 [2024-12-13 06:46:08.096642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68144 ] 00:07:03.707 [2024-12-13 06:46:08.225601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.966 [2024-12-13 06:46:08.256158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.966 [2024-12-13 06:46:08.284951] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.966 [2024-12-13 06:46:08.326141] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:03.966 00:07:03.967 Compression does not support the verify option, aborting. 00:07:03.967 06:46:08 -- common/autotest_common.sh@653 -- # es=161 00:07:03.967 06:46:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.967 06:46:08 -- common/autotest_common.sh@662 -- # es=33 00:07:03.967 ************************************ 00:07:03.967 END TEST accel_compress_verify 00:07:03.967 ************************************ 00:07:03.967 06:46:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.967 06:46:08 -- common/autotest_common.sh@670 -- # es=1 00:07:03.967 06:46:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.967 00:07:03.967 real 0m0.306s 00:07:03.967 user 0m0.177s 00:07:03.967 sys 0m0.071s 00:07:03.967 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.967 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 06:46:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:03.967 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:03.967 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.967 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 ************************************ 00:07:03.967 START TEST accel_wrong_workload 00:07:03.967 ************************************ 00:07:03.967 06:46:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:03.967 06:46:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.967 06:46:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:03.967 06:46:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:03.967 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.967 06:46:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:03.967 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.967 06:46:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:03.967 06:46:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:03.967 06:46:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.967 06:46:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.967 06:46:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.967 06:46:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.967 06:46:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.967 06:46:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.967 06:46:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.967 06:46:08 -- accel/accel.sh@42 -- # jq -r . 
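The wrong-workload case set up here aims the same NOT pattern at argument parsing itself: -w foobar has to be rejected before the app ever starts. The equivalent check in plain bash (binary path as used throughout this log):

  # Expect spdk_app_parse_args to fail on an unknown -w value.
  if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar; then
      echo "BUG: unknown workload type was accepted" >&2
      exit 1
  fi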
00:07:03.967 Unsupported workload type: foobar 00:07:03.967 [2024-12-13 06:46:08.448704] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:03.967 accel_perf options: 00:07:03.967 [-h help message] 00:07:03.967 [-q queue depth per core] 00:07:03.967 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:03.967 [-T number of threads per core] 00:07:03.967 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.967 [-t time in seconds] 00:07:03.967 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.967 dif_verify, dif_generate, dif_generate_copy] 00:07:03.967 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:03.967 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:03.967 [-S for crc32c workload, use this seed value (default 0)] 00:07:03.967 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:03.967 [-f for fill workload, use this BYTE value (default 255)] 00:07:03.967 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.967 [-y verify result if this switch is on] 00:07:03.967 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.967 Can be used to spread operations across a wider range of memory. 00:07:03.967 06:46:08 -- common/autotest_common.sh@653 -- # es=1 00:07:03.967 06:46:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.967 06:46:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.967 06:46:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.967 00:07:03.967 real 0m0.030s 00:07:03.967 user 0m0.018s 00:07:03.967 sys 0m0.011s 00:07:03.967 ************************************ 00:07:03.967 END TEST accel_wrong_workload 00:07:03.967 ************************************ 00:07:03.967 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.967 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.226 06:46:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:04.226 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:04.226 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.226 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.226 ************************************ 00:07:04.226 START TEST accel_negative_buffers 00:07:04.226 ************************************ 00:07:04.226 06:46:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:04.226 06:46:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:04.226 06:46:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:04.226 06:46:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:04.226 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.226 06:46:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:04.226 06:46:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.226 06:46:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:04.226 06:46:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:04.226 06:46:08 -- accel/accel.sh@12 -- #
build_accel_config 00:07:04.226 06:46:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.226 06:46:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.226 06:46:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.226 06:46:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.226 06:46:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.226 06:46:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.226 06:46:08 -- accel/accel.sh@42 -- # jq -r . 00:07:04.226 -x option must be non-negative. 00:07:04.226 [2024-12-13 06:46:08.525428] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:04.226 accel_perf options: 00:07:04.226 [-h help message] 00:07:04.226 [-q queue depth per core] 00:07:04.226 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:04.226 [-T number of threads per core] 00:07:04.226 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:04.226 [-t time in seconds] 00:07:04.226 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:04.226 dif_verify, dif_generate, dif_generate_copy] 00:07:04.226 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:04.226 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:04.226 [-S for crc32c workload, use this seed value (default 0)] 00:07:04.226 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:04.226 [-f for fill workload, use this BYTE value (default 255)] 00:07:04.226 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:04.226 [-y verify result if this switch is on] 00:07:04.226 [-a tasks to allocate per core (default: same value as -q)] 00:07:04.226 Can be used to spread operations across a wider range of memory.
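Per the usage text above, -x sets the number of xor source buffers with a minimum of 2, so the negative count fails in the parser just as the bogus workload did. A sketch pairing the failing call with a legal one:

  # -x -1 must die at parse time ("-x option must be non-negative.").
  if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1; then
      echo "BUG: negative -x was accepted" >&2
      exit 1
  fi
  # For contrast, a valid xor run: three source buffers, verification on.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3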
00:07:04.227 06:46:08 -- common/autotest_common.sh@653 -- # es=1 00:07:04.227 06:46:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.227 06:46:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.227 06:46:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.227 00:07:04.227 real 0m0.030s 00:07:04.227 user 0m0.012s 00:07:04.227 sys 0m0.015s 00:07:04.227 ************************************ 00:07:04.227 END TEST accel_negative_buffers 00:07:04.227 ************************************ 00:07:04.227 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.227 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.227 06:46:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:04.227 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.227 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.227 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:07:04.227 ************************************ 00:07:04.227 START TEST accel_crc32c 00:07:04.227 ************************************ 00:07:04.227 06:46:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:04.227 06:46:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.227 06:46:08 -- accel/accel.sh@17 -- # local accel_module 00:07:04.227 06:46:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:04.227 06:46:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:04.227 06:46:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.227 06:46:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.227 06:46:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.227 06:46:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.227 06:46:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.227 06:46:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.227 06:46:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.227 06:46:08 -- accel/accel.sh@42 -- # jq -r . 00:07:04.227 [2024-12-13 06:46:08.602193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.227 [2024-12-13 06:46:08.602479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68209 ] 00:07:04.227 [2024-12-13 06:46:08.740796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.486 [2024-12-13 06:46:08.771801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.422 06:46:09 -- accel/accel.sh@18 -- # out=' 00:07:05.422 SPDK Configuration: 00:07:05.422 Core mask: 0x1 00:07:05.422 00:07:05.422 Accel Perf Configuration: 00:07:05.422 Workload Type: crc32c 00:07:05.423 CRC-32C seed: 32 00:07:05.423 Transfer size: 4096 bytes 00:07:05.423 Vector count 1 00:07:05.423 Module: software 00:07:05.423 Queue depth: 32 00:07:05.423 Allocate depth: 32 00:07:05.423 # threads/core: 1 00:07:05.423 Run time: 1 seconds 00:07:05.423 Verify: Yes 00:07:05.423 00:07:05.423 Running for 1 seconds... 
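The bandwidth column in the result tables that follow is just the transfer rate times the 4096-byte transfer size, truncated to MiB/s. A one-liner that reproduces the first crc32c figure:

  # 521504 transfers/s * 4096 B / 1048576 = 2037 MiB/s
  awk 'BEGIN { printf "%d MiB/s\n", 521504 * 4096 / 1048576 }'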
00:07:05.423 00:07:05.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.423 ------------------------------------------------------------------------------------ 00:07:05.423 0,0 521504/s 2037 MiB/s 0 0 00:07:05.423 ==================================================================================== 00:07:05.423 Total 521504/s 2037 MiB/s 0 0' 00:07:05.423 06:46:09 -- accel/accel.sh@20 -- # IFS=: 00:07:05.423 06:46:09 -- accel/accel.sh@20 -- # read -r var val 00:07:05.423 06:46:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:05.423 06:46:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:05.423 06:46:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.423 06:46:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.423 06:46:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.423 06:46:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.423 06:46:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.423 06:46:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.423 06:46:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.423 06:46:09 -- accel/accel.sh@42 -- # jq -r . 00:07:05.423 [2024-12-13 06:46:09.923055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.423 [2024-12-13 06:46:09.923148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68223 ] 00:07:05.681 [2024-12-13 06:46:10.057899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.681 [2024-12-13 06:46:10.088861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val=0x1 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val=crc32c 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.681 06:46:10 -- accel/accel.sh@21 -- # val=32 00:07:05.681 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.681 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val=software 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val=32 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val=32 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val=1 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val=Yes 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.682 06:46:10 -- accel/accel.sh@21 -- # val= 00:07:05.682 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.682 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 ************************************ 00:07:07.060 END TEST accel_crc32c 00:07:07.060 ************************************ 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 
00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@21 -- # val= 00:07:07.060 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.060 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.060 06:46:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.060 06:46:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:07.060 06:46:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.060 00:07:07.060 real 0m2.629s 00:07:07.060 user 0m2.292s 00:07:07.060 sys 0m0.137s 00:07:07.060 06:46:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.060 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:07:07.060 06:46:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:07.060 06:46:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:07.060 06:46:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.060 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:07:07.060 ************************************ 00:07:07.060 START TEST accel_crc32c_C2 00:07:07.060 ************************************ 00:07:07.060 06:46:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:07.060 06:46:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.060 06:46:11 -- accel/accel.sh@17 -- # local accel_module 00:07:07.060 06:46:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:07.060 06:46:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:07.060 06:46:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.060 06:46:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.060 06:46:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.060 06:46:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.060 06:46:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.060 06:46:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.060 06:46:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.060 06:46:11 -- accel/accel.sh@42 -- # jq -r . 00:07:07.060 [2024-12-13 06:46:11.287191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.060 [2024-12-13 06:46:11.287279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68252 ] 00:07:07.060 [2024-12-13 06:46:11.423146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.060 [2024-12-13 06:46:11.453788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.439 06:46:12 -- accel/accel.sh@18 -- # out=' 00:07:08.439 SPDK Configuration: 00:07:08.439 Core mask: 0x1 00:07:08.439 00:07:08.439 Accel Perf Configuration: 00:07:08.439 Workload Type: crc32c 00:07:08.439 CRC-32C seed: 0 00:07:08.439 Transfer size: 4096 bytes 00:07:08.439 Vector count 2 00:07:08.439 Module: software 00:07:08.439 Queue depth: 32 00:07:08.439 Allocate depth: 32 00:07:08.439 # threads/core: 1 00:07:08.439 Run time: 1 seconds 00:07:08.439 Verify: Yes 00:07:08.439 00:07:08.439 Running for 1 seconds... 
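This run repeats crc32c with -C 2, so each operation checksums a chain of two 4096-byte vectors; judging by the 0,0 row below, the MiB/s column counts the full chain, doubling the per-transfer bandwidth. A sketch of the equivalent standalone invocation and the arithmetic:

  # Chained crc32c: seed 0, a two-element iovec per operation, verify on.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2
  # 392608 ops/s * 2 * 4096 B / 1048576 = 3067 MiB/s
  awk 'BEGIN { printf "%d MiB/s\n", 392608 * 2 * 4096 / 1048576 }'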
00:07:08.439 00:07:08.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.439 ------------------------------------------------------------------------------------ 00:07:08.439 0,0 392608/s 3067 MiB/s 0 0 00:07:08.439 ==================================================================================== 00:07:08.439 Total 392608/s 3067 MiB/s 0 0' 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:08.439 06:46:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:08.439 06:46:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.439 06:46:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.439 06:46:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.439 06:46:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.439 06:46:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.439 06:46:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.439 06:46:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.439 06:46:12 -- accel/accel.sh@42 -- # jq -r . 00:07:08.439 [2024-12-13 06:46:12.601176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.439 [2024-12-13 06:46:12.601263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68266 ] 00:07:08.439 [2024-12-13 06:46:12.739051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.439 [2024-12-13 06:46:12.773088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=0x1 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=crc32c 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=0 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 --
accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=software 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=32 00:07:08.439 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.439 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.439 06:46:12 -- accel/accel.sh@21 -- # val=32 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.440 06:46:12 -- accel/accel.sh@21 -- # val=1 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.440 06:46:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.440 06:46:12 -- accel/accel.sh@21 -- # val=Yes 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.440 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.440 06:46:12 -- accel/accel.sh@21 -- # val= 00:07:08.440 06:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.440 06:46:12 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- 
accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@21 -- # val= 00:07:09.406 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:07:09.406 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:07:09.406 06:46:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.406 06:46:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:09.406 06:46:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.406 00:07:09.406 real 0m2.628s 00:07:09.406 user 0m2.276s 00:07:09.406 sys 0m0.148s 00:07:09.406 06:46:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.406 06:46:13 -- common/autotest_common.sh@10 -- # set +x 00:07:09.406 ************************************ 00:07:09.406 END TEST accel_crc32c_C2 00:07:09.406 ************************************ 00:07:09.665 06:46:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:09.665 06:46:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.665 06:46:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.665 06:46:13 -- common/autotest_common.sh@10 -- # set +x 00:07:09.665 ************************************ 00:07:09.665 START TEST accel_copy 00:07:09.665 ************************************ 00:07:09.665 06:46:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:09.665 06:46:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.665 06:46:13 -- accel/accel.sh@17 -- # local accel_module 00:07:09.665 06:46:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:09.665 06:46:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:09.665 06:46:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.665 06:46:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.665 06:46:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.665 06:46:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.665 06:46:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.665 06:46:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.665 06:46:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.665 06:46:13 -- accel/accel.sh@42 -- # jq -r . 00:07:09.665 [2024-12-13 06:46:13.965931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.665 [2024-12-13 06:46:13.966013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68306 ] 00:07:09.665 [2024-12-13 06:46:14.094381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.665 [2024-12-13 06:46:14.126746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.045 06:46:15 -- accel/accel.sh@18 -- # out=' 00:07:11.045 SPDK Configuration: 00:07:11.045 Core mask: 0x1 00:07:11.045 00:07:11.045 Accel Perf Configuration: 00:07:11.045 Workload Type: copy 00:07:11.045 Transfer size: 4096 bytes 00:07:11.045 Vector count 1 00:07:11.045 Module: software 00:07:11.045 Queue depth: 32 00:07:11.045 Allocate depth: 32 00:07:11.045 # threads/core: 1 00:07:11.045 Run time: 1 seconds 00:07:11.045 Verify: Yes 00:07:11.045 00:07:11.045 Running for 1 seconds... 
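For plain copy the harness only asserts that a software module served the opcode, mirroring the [[ -n software ]] checks just above. A small sketch that runs the copy case and greps the configuration dump for the module line (the variable name is mine):

  out=$(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y 2>&1)
  # The SPDK Configuration block should report the software module.
  grep -q 'Module:[[:space:]]*software' <<< "$out" && echo "software module OK"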
00:07:11.045 00:07:11.045 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.045 ------------------------------------------------------------------------------------ 00:07:11.045 0,0 357440/s 1396 MiB/s 0 0 00:07:11.045 ==================================================================================== 00:07:11.045 Total 357440/s 1396 MiB/s 0 0' 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:11.045 06:46:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:11.045 06:46:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.045 06:46:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.045 06:46:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.045 06:46:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.045 06:46:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.045 06:46:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.045 06:46:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.045 06:46:15 -- accel/accel.sh@42 -- # jq -r . 00:07:11.045 [2024-12-13 06:46:15.279402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.045 [2024-12-13 06:46:15.279521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68320 ] 00:07:11.045 [2024-12-13 06:46:15.423728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.045 [2024-12-13 06:46:15.454082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=0x1 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=copy 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- 
accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=software 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=32 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=32 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=1 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val=Yes 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.045 06:46:15 -- accel/accel.sh@21 -- # val= 00:07:11.045 06:46:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.045 06:46:15 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@21 -- # val= 00:07:12.424 06:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.424 06:46:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.424 06:46:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:12.424 06:46:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.424 06:46:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:12.424 06:46:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.424 00:07:12.424 real 0m2.637s 00:07:12.424 user 0m2.287s 00:07:12.424 sys 0m0.148s 00:07:12.424 06:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.424 06:46:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.424 ************************************ 00:07:12.424 END TEST accel_copy 00:07:12.424 ************************************ 00:07:12.424 06:46:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.424 06:46:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:12.424 06:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.424 06:46:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.424 ************************************ 00:07:12.424 START TEST accel_fill 00:07:12.424 ************************************ 00:07:12.424 06:46:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.424 06:46:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.424 06:46:16 -- accel/accel.sh@17 -- # local accel_module 00:07:12.424 06:46:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.424 06:46:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.424 06:46:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.424 06:46:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.424 06:46:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.424 06:46:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.424 06:46:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.424 06:46:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.424 06:46:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.424 06:46:16 -- accel/accel.sh@42 -- # jq -r . 00:07:12.424 [2024-12-13 06:46:16.669597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.424 [2024-12-13 06:46:16.669876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68349 ] 00:07:12.424 [2024-12-13 06:46:16.808001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.424 [2024-12-13 06:46:16.839602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.802 06:46:17 -- accel/accel.sh@18 -- # out=' 00:07:13.802 SPDK Configuration: 00:07:13.802 Core mask: 0x1 00:07:13.802 00:07:13.802 Accel Perf Configuration: 00:07:13.802 Workload Type: fill 00:07:13.802 Fill pattern: 0x80 00:07:13.802 Transfer size: 4096 bytes 00:07:13.802 Vector count 1 00:07:13.802 Module: software 00:07:13.802 Queue depth: 64 00:07:13.802 Allocate depth: 64 00:07:13.802 # threads/core: 1 00:07:13.802 Run time: 1 seconds 00:07:13.802 Verify: Yes 00:07:13.802 00:07:13.802 Running for 1 seconds... 
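fill takes its byte pattern from -f in decimal, so -f 128 yields the 0x80 pattern shown in the configuration dump, with queue depth and task pool both at 64. The direct equivalent of this test's invocation:

  # Fill 4 KiB buffers with byte 0x80 (-f 128), qd 64, 64 tasks, verify.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y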
00:07:13.802 00:07:13.802 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.802 ------------------------------------------------------------------------------------ 00:07:13.802 0,0 511552/s 1998 MiB/s 0 0 00:07:13.803 ==================================================================================== 00:07:13.803 Total 511552/s 1998 MiB/s 0 0' 00:07:13.803 06:46:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.803 06:46:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.803 06:46:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.803 06:46:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.803 06:46:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.803 06:46:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.803 06:46:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.803 06:46:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.803 06:46:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.803 06:46:17 -- accel/accel.sh@42 -- # jq -r . 00:07:13.803 [2024-12-13 06:46:17.998904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.803 [2024-12-13 06:46:17.999005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68376 ] 00:07:13.803 [2024-12-13 06:46:18.136268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.803 [2024-12-13 06:46:18.171187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=0x1 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=fill 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=0x80 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 
00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=software 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=64 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=64 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=1 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val=Yes 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.803 06:46:18 -- accel/accel.sh@21 -- # val= 00:07:13.803 06:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.803 06:46:18 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 
00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@21 -- # val= 00:07:15.181 06:46:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.181 06:46:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.181 06:46:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.181 06:46:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:15.181 06:46:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.181 00:07:15.181 real 0m2.655s 00:07:15.181 user 0m2.304s 00:07:15.181 sys 0m0.149s 00:07:15.181 06:46:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.181 06:46:19 -- common/autotest_common.sh@10 -- # set +x 00:07:15.181 ************************************ 00:07:15.181 END TEST accel_fill 00:07:15.181 ************************************ 00:07:15.181 06:46:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:15.181 06:46:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.181 06:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.181 06:46:19 -- common/autotest_common.sh@10 -- # set +x 00:07:15.181 ************************************ 00:07:15.181 START TEST accel_copy_crc32c 00:07:15.181 ************************************ 00:07:15.181 06:46:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:15.181 06:46:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.181 06:46:19 -- accel/accel.sh@17 -- # local accel_module 00:07:15.181 06:46:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:15.181 06:46:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:15.181 06:46:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.181 06:46:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.181 06:46:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.181 06:46:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.181 06:46:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.181 06:46:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.181 06:46:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.181 06:46:19 -- accel/accel.sh@42 -- # jq -r . 00:07:15.181 [2024-12-13 06:46:19.380987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.181 [2024-12-13 06:46:19.381270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68405 ] 00:07:15.181 [2024-12-13 06:46:19.517287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.181 [2024-12-13 06:46:19.548644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.559 06:46:20 -- accel/accel.sh@18 -- # out=' 00:07:16.559 SPDK Configuration: 00:07:16.559 Core mask: 0x1 00:07:16.559 00:07:16.559 Accel Perf Configuration: 00:07:16.559 Workload Type: copy_crc32c 00:07:16.559 CRC-32C seed: 0 00:07:16.559 Vector size: 4096 bytes 00:07:16.559 Transfer size: 4096 bytes 00:07:16.559 Vector count 1 00:07:16.559 Module: software 00:07:16.559 Queue depth: 32 00:07:16.559 Allocate depth: 32 00:07:16.559 # threads/core: 1 00:07:16.559 Run time: 1 seconds 00:07:16.559 Verify: Yes 00:07:16.559 00:07:16.559 Running for 1 seconds... 
00:07:16.559 00:07:16.559 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.559 ------------------------------------------------------------------------------------ 00:07:16.559 0,0 288672/s 1127 MiB/s 0 0 00:07:16.559 ==================================================================================== 00:07:16.559 Total 288672/s 1127 MiB/s 0 0' 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.559 06:46:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.559 06:46:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.559 06:46:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.559 06:46:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.559 06:46:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.559 06:46:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.559 06:46:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.559 06:46:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.559 06:46:20 -- accel/accel.sh@42 -- # jq -r . 00:07:16.559 [2024-12-13 06:46:20.690811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.559 [2024-12-13 06:46:20.690903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68419 ] 00:07:16.559 [2024-12-13 06:46:20.828163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.559 [2024-12-13 06:46:20.858286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=0x1 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=0 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 
06:46:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=software 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=32 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=32 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=1 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val=Yes 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.559 06:46:20 -- accel/accel.sh@21 -- # val= 00:07:16.559 06:46:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.559 06:46:20 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 
00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@21 -- # val= 00:07:17.496 06:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.496 06:46:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.496 06:46:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.496 06:46:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:17.496 06:46:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.496 00:07:17.496 real 0m2.622s 00:07:17.496 user 0m2.287s 00:07:17.496 sys 0m0.134s 00:07:17.496 06:46:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.496 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:07:17.496 ************************************ 00:07:17.496 END TEST accel_copy_crc32c 00:07:17.496 ************************************ 00:07:17.754 06:46:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.754 06:46:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.754 06:46:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.754 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.754 ************************************ 00:07:17.754 START TEST accel_copy_crc32c_C2 00:07:17.754 ************************************ 00:07:17.754 06:46:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.754 06:46:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.754 06:46:22 -- accel/accel.sh@17 -- # local accel_module 00:07:17.754 06:46:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:17.754 06:46:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:17.754 06:46:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.754 06:46:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.754 06:46:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.754 06:46:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.754 06:46:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.755 06:46:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.755 06:46:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.755 06:46:22 -- accel/accel.sh@42 -- # jq -r . 00:07:17.755 [2024-12-13 06:46:22.050883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
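Aside: the copy_crc32c workload exercised above pairs a plain buffer copy with a CRC-32C checksum computed over the copied data. A minimal C sketch of one 4096-byte operation follows (illustrative only, not SPDK's software module; the function names are ad hoc, and mapping the log's "CRC-32C seed: 0" onto the init/finalize convention below is an assumption):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t seed, const uint8_t *p, size_t len)
    {
        uint32_t crc = ~seed;                 /* seed 0 -> conventional init 0xFFFFFFFF */
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return ~crc;                          /* conventional final inversion */
    }

    /* One copy_crc32c operation: copy the buffer, checksum what was copied. */
    static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src, size_t len,
                                uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(seed, dst, len);
    }

    int main(void)
    {
        static uint8_t src[4096], dst[4096];
        memset(src, 0xA5, sizeof src);
        printf("crc32c = 0x%08x\n", copy_crc32c(dst, src, sizeof src, 0));
        return 0;
    }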
00:07:17.755 [2024-12-13 06:46:22.050981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68454 ] 00:07:17.755 [2024-12-13 06:46:22.183781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.755 [2024-12-13 06:46:22.214382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.133 06:46:23 -- accel/accel.sh@18 -- # out=' 00:07:19.133 SPDK Configuration: 00:07:19.133 Core mask: 0x1 00:07:19.133 00:07:19.133 Accel Perf Configuration: 00:07:19.133 Workload Type: copy_crc32c 00:07:19.133 CRC-32C seed: 0 00:07:19.133 Vector size: 4096 bytes 00:07:19.133 Transfer size: 8192 bytes 00:07:19.133 Vector count 2 00:07:19.133 Module: software 00:07:19.133 Queue depth: 32 00:07:19.133 Allocate depth: 32 00:07:19.133 # threads/core: 1 00:07:19.133 Run time: 1 seconds 00:07:19.133 Verify: Yes 00:07:19.133 00:07:19.133 Running for 1 seconds... 00:07:19.133 00:07:19.133 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.133 ------------------------------------------------------------------------------------ 00:07:19.133 0,0 207648/s 1622 MiB/s 0 0 00:07:19.133 ==================================================================================== 00:07:19.133 Total 207648/s 1622 MiB/s 0 0' 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:19.133 06:46:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:19.133 06:46:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.133 06:46:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.133 06:46:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.133 06:46:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.133 06:46:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.133 06:46:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.133 06:46:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.133 06:46:23 -- accel/accel.sh@42 -- # jq -r . 00:07:19.133 [2024-12-13 06:46:23.354982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
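A quick consistency check on the table above: the bandwidth column is transfers/s times the transfer size, so 207,648/s × 8,192 B = 1,700,954,112 B/s ÷ 2^20 ≈ 1,622 MiB/s, and with only core 0 active in the 0x1 core mask the Total row necessarily equals the single per-core row. The same arithmetic reproduces every other table in this run (e.g. 288,672/s × 4,096 B ≈ 1,127 MiB/s for the single-vector case).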
00:07:19.133 [2024-12-13 06:46:23.355072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68473 ] 00:07:19.133 [2024-12-13 06:46:23.483383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.133 [2024-12-13 06:46:23.513792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=0x1 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=0 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=software 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=32 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=32 
00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=1 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.133 06:46:23 -- accel/accel.sh@21 -- # val=Yes 00:07:19.133 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.133 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.134 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.134 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.134 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.134 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.134 06:46:23 -- accel/accel.sh@21 -- # val= 00:07:19.134 06:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.134 06:46:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.134 06:46:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.513 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.513 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.513 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.513 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.513 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.513 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.513 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.513 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.513 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.513 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.513 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.514 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.514 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.514 06:46:24 -- accel/accel.sh@21 -- # val= 00:07:20.514 06:46:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.514 06:46:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.514 06:46:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.514 06:46:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.514 06:46:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:20.514 06:46:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.514 00:07:20.514 real 0m2.604s 00:07:20.514 user 0m2.269s 00:07:20.514 sys 0m0.138s 00:07:20.514 06:46:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.514 06:46:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.514 ************************************ 00:07:20.514 END TEST accel_copy_crc32c_C2 00:07:20.514 ************************************ 00:07:20.514 06:46:24 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:20.514 06:46:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:20.514 06:46:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.514 06:46:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.514 ************************************ 00:07:20.514 START TEST accel_dualcast 00:07:20.514 ************************************ 00:07:20.514 06:46:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:20.514 06:46:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.514 06:46:24 -- accel/accel.sh@17 -- # local accel_module 00:07:20.514 06:46:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:20.514 06:46:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:20.514 06:46:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.514 06:46:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.514 06:46:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.514 06:46:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.514 06:46:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.514 06:46:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.514 06:46:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.514 06:46:24 -- accel/accel.sh@42 -- # jq -r . 00:07:20.514 [2024-12-13 06:46:24.719727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.514 [2024-12-13 06:46:24.719961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68502 ] 00:07:20.514 [2024-12-13 06:46:24.852359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.514 [2024-12-13 06:46:24.886318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.890 06:46:26 -- accel/accel.sh@18 -- # out=' 00:07:21.890 SPDK Configuration: 00:07:21.890 Core mask: 0x1 00:07:21.890 00:07:21.890 Accel Perf Configuration: 00:07:21.890 Workload Type: dualcast 00:07:21.890 Transfer size: 4096 bytes 00:07:21.890 Vector count 1 00:07:21.890 Module: software 00:07:21.890 Queue depth: 32 00:07:21.890 Allocate depth: 32 00:07:21.890 # threads/core: 1 00:07:21.890 Run time: 1 seconds 00:07:21.890 Verify: Yes 00:07:21.890 00:07:21.890 Running for 1 seconds... 00:07:21.890 00:07:21.890 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.890 ------------------------------------------------------------------------------------ 00:07:21.890 0,0 399712/s 1561 MiB/s 0 0 00:07:21.890 ==================================================================================== 00:07:21.890 Total 399712/s 1561 MiB/s 0 0' 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:21.890 06:46:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.890 06:46:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.890 06:46:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.890 06:46:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.890 06:46:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.890 06:46:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.890 06:46:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.890 06:46:26 -- accel/accel.sh@42 -- # jq -r . 
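Aside: dualcast, benchmarked above, writes one source buffer to two destinations in a single operation. A sketch of the per-4 KiB work (not SPDK's implementation; the function name is ad hoc):

    #include <stddef.h>
    #include <string.h>

    /* One dualcast operation: the same source lands in two destinations. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }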
00:07:21.890 [2024-12-13 06:46:26.029927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.890 [2024-12-13 06:46:26.030017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68522 ] 00:07:21.890 [2024-12-13 06:46:26.161860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.890 [2024-12-13 06:46:26.192492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=0x1 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=dualcast 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=software 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=32 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=32 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=1 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 
06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val=Yes 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.890 06:46:26 -- accel/accel.sh@21 -- # val= 00:07:21.890 06:46:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # IFS=: 00:07:21.890 06:46:26 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@21 -- # val= 00:07:22.827 06:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.827 06:46:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.827 06:46:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.827 06:46:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:22.827 06:46:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.827 00:07:22.827 real 0m2.622s 00:07:22.827 user 0m2.285s 00:07:22.827 sys 0m0.137s 00:07:22.827 06:46:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.827 ************************************ 00:07:22.827 06:46:27 -- common/autotest_common.sh@10 -- # set +x 00:07:22.827 END TEST accel_dualcast 00:07:22.827 ************************************ 00:07:23.086 06:46:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:23.086 06:46:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.086 06:46:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.086 06:46:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.086 ************************************ 00:07:23.086 START TEST accel_compare 00:07:23.086 ************************************ 00:07:23.086 06:46:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:23.086 
06:46:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.086 06:46:27 -- accel/accel.sh@17 -- # local accel_module 00:07:23.086 06:46:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:23.086 06:46:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:23.086 06:46:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.086 06:46:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.086 06:46:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.086 06:46:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.086 06:46:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.086 06:46:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.086 06:46:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.086 06:46:27 -- accel/accel.sh@42 -- # jq -r . 00:07:23.086 [2024-12-13 06:46:27.386691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.086 [2024-12-13 06:46:27.386818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68555 ] 00:07:23.086 [2024-12-13 06:46:27.524707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.086 [2024-12-13 06:46:27.555474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.465 06:46:28 -- accel/accel.sh@18 -- # out=' 00:07:24.465 SPDK Configuration: 00:07:24.465 Core mask: 0x1 00:07:24.465 00:07:24.465 Accel Perf Configuration: 00:07:24.465 Workload Type: compare 00:07:24.465 Transfer size: 4096 bytes 00:07:24.465 Vector count 1 00:07:24.465 Module: software 00:07:24.465 Queue depth: 32 00:07:24.465 Allocate depth: 32 00:07:24.465 # threads/core: 1 00:07:24.465 Run time: 1 seconds 00:07:24.465 Verify: Yes 00:07:24.465 00:07:24.465 Running for 1 seconds... 00:07:24.465 00:07:24.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.465 ------------------------------------------------------------------------------------ 00:07:24.465 0,0 529472/s 2068 MiB/s 0 0 00:07:24.465 ==================================================================================== 00:07:24.465 Total 529472/s 2068 MiB/s 0 0' 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:24.465 06:46:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.465 06:46:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.465 06:46:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.465 06:46:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.465 06:46:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.465 06:46:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.465 06:46:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.465 06:46:28 -- accel/accel.sh@42 -- # jq -r . 00:07:24.465 [2024-12-13 06:46:28.702100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
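Aside: the compare workload is essentially a verified memcmp over each pair of 4 KiB buffers; a pair that differed would show up in the Miscompares column. A sketch (not SPDK's code; the name compare_op is ad hoc):

    #include <stddef.h>
    #include <string.h>

    /* One compare operation: 0 if the buffers match, 1 on a miscompare. */
    static int compare_op(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }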
00:07:24.465 [2024-12-13 06:46:28.702834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68570 ] 00:07:24.465 [2024-12-13 06:46:28.848268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.465 [2024-12-13 06:46:28.878047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=0x1 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=compare 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=software 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=32 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=32 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=1 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val=Yes 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.465 06:46:28 -- accel/accel.sh@21 -- # val= 00:07:24.465 06:46:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.465 06:46:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@21 -- # val= 00:07:25.861 06:46:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.861 06:46:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.861 06:46:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.861 06:46:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:25.861 06:46:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.861 ************************************ 00:07:25.861 END TEST accel_compare 00:07:25.861 ************************************ 00:07:25.861 00:07:25.861 real 0m2.646s 00:07:25.861 user 0m2.293s 00:07:25.861 sys 0m0.153s 00:07:25.861 06:46:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.861 06:46:30 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 06:46:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.861 06:46:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:25.861 06:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.861 06:46:30 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 ************************************ 00:07:25.861 START TEST accel_xor 00:07:25.861 ************************************ 00:07:25.861 06:46:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:25.861 06:46:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.861 06:46:30 -- accel/accel.sh@17 -- # local accel_module 00:07:25.861 
06:46:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:25.861 06:46:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.861 06:46:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.861 06:46:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.861 06:46:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.861 06:46:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.861 06:46:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.861 06:46:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.861 06:46:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.861 06:46:30 -- accel/accel.sh@42 -- # jq -r . 00:07:25.861 [2024-12-13 06:46:30.088883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.861 [2024-12-13 06:46:30.088984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68605 ] 00:07:25.861 [2024-12-13 06:46:30.224255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.861 [2024-12-13 06:46:30.254675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.250 06:46:31 -- accel/accel.sh@18 -- # out=' 00:07:27.250 SPDK Configuration: 00:07:27.250 Core mask: 0x1 00:07:27.250 00:07:27.250 Accel Perf Configuration: 00:07:27.250 Workload Type: xor 00:07:27.250 Source buffers: 2 00:07:27.250 Transfer size: 4096 bytes 00:07:27.250 Vector count 1 00:07:27.250 Module: software 00:07:27.250 Queue depth: 32 00:07:27.250 Allocate depth: 32 00:07:27.250 # threads/core: 1 00:07:27.250 Run time: 1 seconds 00:07:27.250 Verify: Yes 00:07:27.250 00:07:27.250 Running for 1 seconds... 00:07:27.250 00:07:27.250 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.250 ------------------------------------------------------------------------------------ 00:07:27.250 0,0 278688/s 1088 MiB/s 0 0 00:07:27.250 ==================================================================================== 00:07:27.250 Total 278688/s 1088 MiB/s 0 0' 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:27.250 06:46:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:27.250 06:46:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.250 06:46:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.250 06:46:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.250 06:46:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.250 06:46:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.250 06:46:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.250 06:46:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.250 06:46:31 -- accel/accel.sh@42 -- # jq -r . 00:07:27.250 [2024-12-13 06:46:31.396734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
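Aside: the xor workload above ("Source buffers: 2") XORs two source buffers byte-wise into a destination. A sketch of one 4 KiB operation (not SPDK's code; the name xor2 is ad hoc):

    #include <stddef.h>
    #include <stdint.h>

    /* One xor operation over two equal-length sources. */
    static void xor2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = a[i] ^ b[i];
    }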
00:07:27.250 [2024-12-13 06:46:31.396997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68619 ] 00:07:27.250 [2024-12-13 06:46:31.532466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.250 [2024-12-13 06:46:31.567831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=0x1 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=xor 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=2 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=software 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=32 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=32 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=1 00:07:27.250 06:46:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val=Yes 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.250 06:46:31 -- accel/accel.sh@21 -- # val= 00:07:27.250 06:46:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.250 06:46:31 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@21 -- # val= 00:07:28.216 06:46:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.216 06:46:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.216 06:46:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.216 06:46:32 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:28.216 06:46:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.216 00:07:28.216 real 0m2.637s 00:07:28.216 user 0m2.286s 00:07:28.216 sys 0m0.149s 00:07:28.216 06:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.216 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:07:28.216 ************************************ 00:07:28.216 END TEST accel_xor 00:07:28.216 ************************************ 00:07:28.476 06:46:32 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:28.476 06:46:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.476 06:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.476 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:07:28.476 ************************************ 00:07:28.476 START TEST accel_xor 00:07:28.476 ************************************ 00:07:28.476 
06:46:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:28.476 06:46:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.476 06:46:32 -- accel/accel.sh@17 -- # local accel_module 00:07:28.476 06:46:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:28.476 06:46:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:28.476 06:46:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.476 06:46:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.476 06:46:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.476 06:46:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.476 06:46:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.476 06:46:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.476 06:46:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.476 06:46:32 -- accel/accel.sh@42 -- # jq -r . 00:07:28.476 [2024-12-13 06:46:32.777456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.476 [2024-12-13 06:46:32.777545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68653 ] 00:07:28.476 [2024-12-13 06:46:32.914675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.476 [2024-12-13 06:46:32.945092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.854 06:46:34 -- accel/accel.sh@18 -- # out=' 00:07:29.854 SPDK Configuration: 00:07:29.854 Core mask: 0x1 00:07:29.854 00:07:29.854 Accel Perf Configuration: 00:07:29.854 Workload Type: xor 00:07:29.854 Source buffers: 3 00:07:29.854 Transfer size: 4096 bytes 00:07:29.854 Vector count 1 00:07:29.854 Module: software 00:07:29.854 Queue depth: 32 00:07:29.854 Allocate depth: 32 00:07:29.854 # threads/core: 1 00:07:29.854 Run time: 1 seconds 00:07:29.855 Verify: Yes 00:07:29.855 00:07:29.855 Running for 1 seconds... 00:07:29.855 00:07:29.855 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.855 ------------------------------------------------------------------------------------ 00:07:29.855 0,0 267104/s 1043 MiB/s 0 0 00:07:29.855 ==================================================================================== 00:07:29.855 Total 267104/s 1043 MiB/s 0 0' 00:07:29.855 06:46:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:29.855 06:46:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.855 06:46:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.855 06:46:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.855 06:46:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.855 06:46:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.855 06:46:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.855 06:46:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.855 06:46:34 -- accel/accel.sh@42 -- # jq -r . 00:07:29.855 [2024-12-13 06:46:34.083407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
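Aside: the -x 3 run repeats the xor workload with three source buffers; for any source count n, each destination byte is the XOR of the n corresponding source bytes. A generalized sketch (not SPDK's code; the name xorn is ad hoc):

    #include <stddef.h>
    #include <stdint.h>

    /* One xor operation over nsrc equal-length sources (nsrc >= 1). */
    static void xorn(uint8_t *dst, const uint8_t *const srcs[], size_t nsrc,
                     size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];
            for (size_t s = 1; s < nsrc; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }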
00:07:29.855 [2024-12-13 06:46:34.083492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:07:29.855 [2024-12-13 06:46:34.210517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.855 [2024-12-13 06:46:34.240851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=0x1 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=xor 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=3 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=software 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=32 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=32 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=1 00:07:29.855 06:46:34 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val=Yes 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.855 06:46:34 -- accel/accel.sh@21 -- # val= 00:07:29.855 06:46:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.855 06:46:34 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@21 -- # val= 00:07:31.234 06:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.234 06:46:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.234 06:46:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.234 06:46:35 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:31.234 06:46:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.234 00:07:31.234 real 0m2.609s 00:07:31.234 user 0m2.286s 00:07:31.234 sys 0m0.124s 00:07:31.234 ************************************ 00:07:31.234 END TEST accel_xor 00:07:31.234 ************************************ 00:07:31.234 06:46:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.234 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:07:31.234 06:46:35 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:31.234 06:46:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:31.234 06:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.234 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:07:31.234 ************************************ 00:07:31.234 START TEST accel_dif_verify 00:07:31.234 ************************************ 
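A note on the dif_verify numbers coming up: the configuration block reports 4096-byte transfers cut into 512-byte blocks carrying 8 bytes of metadata each, so every transfer moves 4096 payload bytes plus 64 metadata bytes. That is one reading consistent with the two bandwidth figures in the table below (461 vs 454 MiB/s at 116288 transfers/s): the per-core row appears to count payload plus metadata, the Total row payload only. A shell cross-check of that interpretation:

    # Hedged cross-check: payload-only vs payload+metadata accounting.
    xfer=4096; block=512; md=8
    blocks=$((xfer / block))                  # 8 blocks per transfer
    with_md=$((xfer + blocks * md))           # 4160 bytes incl. metadata
    echo $((116288 * xfer / 1048576))         # 454 MiB/s, matches the Total row
    echo $((116288 * with_md / 1048576))      # 461 MiB/s, matches the per-core row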
00:07:31.234 06:46:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:31.234 06:46:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.234 06:46:35 -- accel/accel.sh@17 -- # local accel_module 00:07:31.234 06:46:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:31.234 06:46:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:31.234 06:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.234 06:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.234 06:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.234 06:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.234 06:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.234 06:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.234 06:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.234 06:46:35 -- accel/accel.sh@42 -- # jq -r . 00:07:31.234 [2024-12-13 06:46:35.433092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.234 [2024-12-13 06:46:35.433174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68702 ] 00:07:31.234 [2024-12-13 06:46:35.563715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.234 [2024-12-13 06:46:35.594140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.612 06:46:36 -- accel/accel.sh@18 -- # out=' 00:07:32.612 SPDK Configuration: 00:07:32.612 Core mask: 0x1 00:07:32.612 00:07:32.612 Accel Perf Configuration: 00:07:32.612 Workload Type: dif_verify 00:07:32.612 Vector size: 4096 bytes 00:07:32.612 Transfer size: 4096 bytes 00:07:32.612 Block size: 512 bytes 00:07:32.612 Metadata size: 8 bytes 00:07:32.612 Vector count 1 00:07:32.612 Module: software 00:07:32.612 Queue depth: 32 00:07:32.612 Allocate depth: 32 00:07:32.612 # threads/core: 1 00:07:32.612 Run time: 1 seconds 00:07:32.612 Verify: No 00:07:32.612 00:07:32.612 Running for 1 seconds... 00:07:32.612 00:07:32.612 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.612 ------------------------------------------------------------------------------------ 00:07:32.612 0,0 116288/s 461 MiB/s 0 0 00:07:32.612 ==================================================================================== 00:07:32.612 Total 116288/s 454 MiB/s 0 0' 00:07:32.612 06:46:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:32.612 06:46:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.612 06:46:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.612 06:46:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.612 06:46:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.612 06:46:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.612 06:46:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.612 06:46:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.612 06:46:36 -- accel/accel.sh@42 -- # jq -r . 00:07:32.612 [2024-12-13 06:46:36.734665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.612 [2024-12-13 06:46:36.734766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68716 ] 00:07:32.612 [2024-12-13 06:46:36.872081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.612 [2024-12-13 06:46:36.902327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val=0x1 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val=dif_verify 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.612 06:46:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:32.612 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.612 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val=software 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 
-- # val=32 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val=32 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val=1 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val=No 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.613 06:46:36 -- accel/accel.sh@21 -- # val= 00:07:32.613 06:46:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.613 06:46:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@21 -- # val= 00:07:33.550 06:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.550 06:46:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.550 06:46:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.550 06:46:38 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:33.550 06:46:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.550 00:07:33.550 real 0m2.608s 00:07:33.550 user 0m2.285s 00:07:33.550 sys 0m0.125s 00:07:33.550 06:46:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.550 ************************************ 00:07:33.550 END TEST accel_dif_verify 00:07:33.550 ************************************ 00:07:33.550 
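All of these cases run through the same wrapper, visible in the trace as run_test plus START/END banners around a timed body; the real/user/sys triplets are bash's time builtin. A minimal sketch of that pattern follows. It is an illustration of what the log shows, not SPDK's actual autotest_common.sh implementation, which additionally manages xtrace state and exit codes:

    # Sketch of the run_test pattern seen in this log (banners + timing).
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"        # emits the real/user/sys lines captured above
        echo "************ END TEST $name ************"
    }
    # Invocation mirroring the next case in the trace:
    run_test accel_dif_generate accel_test -t 1 -w dif_generate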
06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.550 06:46:38 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:33.550 06:46:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:33.550 06:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.550 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.809 ************************************ 00:07:33.809 START TEST accel_dif_generate 00:07:33.809 ************************************ 00:07:33.809 06:46:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:33.809 06:46:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.809 06:46:38 -- accel/accel.sh@17 -- # local accel_module 00:07:33.809 06:46:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:33.809 06:46:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:33.809 06:46:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.809 06:46:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.809 06:46:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.809 06:46:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.809 06:46:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.809 06:46:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.809 06:46:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.809 06:46:38 -- accel/accel.sh@42 -- # jq -r . 00:07:33.809 [2024-12-13 06:46:38.098669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.809 [2024-12-13 06:46:38.098768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68750 ] 00:07:33.809 [2024-12-13 06:46:38.235580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.809 [2024-12-13 06:46:38.266336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.186 06:46:39 -- accel/accel.sh@18 -- # out=' 00:07:35.186 SPDK Configuration: 00:07:35.186 Core mask: 0x1 00:07:35.186 00:07:35.186 Accel Perf Configuration: 00:07:35.186 Workload Type: dif_generate 00:07:35.186 Vector size: 4096 bytes 00:07:35.186 Transfer size: 4096 bytes 00:07:35.186 Block size: 512 bytes 00:07:35.186 Metadata size: 8 bytes 00:07:35.186 Vector count 1 00:07:35.186 Module: software 00:07:35.186 Queue depth: 32 00:07:35.186 Allocate depth: 32 00:07:35.186 # threads/core: 1 00:07:35.186 Run time: 1 seconds 00:07:35.186 Verify: No 00:07:35.186 00:07:35.186 Running for 1 seconds... 
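The summary table that follows can be sanity-checked from the configuration just printed: bandwidth is transfers per second times the 4096-byte transfer size, and the Total row works out exactly in MiB (2^20 bytes), not MB:

    # Cross-check of the Total row below for dif_generate.
    echo $((142592 * 4096 / 1048576))   # 557, matching "Total 142592/s 557 MiB/s"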
00:07:35.186 00:07:35.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.186 ------------------------------------------------------------------------------------ 00:07:35.186 0,0 142592/s 565 MiB/s 0 0 00:07:35.186 ==================================================================================== 00:07:35.186 Total 142592/s 557 MiB/s 0 0' 00:07:35.186 06:46:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:35.186 06:46:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.186 06:46:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.186 06:46:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.186 06:46:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.186 06:46:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.186 06:46:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.186 06:46:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.186 06:46:39 -- accel/accel.sh@42 -- # jq -r . 00:07:35.186 [2024-12-13 06:46:39.398139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.186 [2024-12-13 06:46:39.398208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68770 ] 00:07:35.186 [2024-12-13 06:46:39.527909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.186 [2024-12-13 06:46:39.558194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=0x1 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=dif_generate 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 
00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=software 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=32 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=32 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=1 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val=No 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.186 06:46:39 -- accel/accel.sh@21 -- # val= 00:07:35.186 06:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.186 06:46:39 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- 
accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@21 -- # val= 00:07:36.561 06:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.561 06:46:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.561 06:46:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.561 06:46:40 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:36.561 06:46:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.561 00:07:36.561 real 0m2.609s 00:07:36.561 user 0m2.283s 00:07:36.561 sys 0m0.127s 00:07:36.561 06:46:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.561 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.561 ************************************ 00:07:36.561 END TEST accel_dif_generate 00:07:36.561 ************************************ 00:07:36.561 06:46:40 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:36.561 06:46:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:36.561 06:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.561 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.561 ************************************ 00:07:36.561 START TEST accel_dif_generate_copy 00:07:36.561 ************************************ 00:07:36.561 06:46:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:36.561 06:46:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.561 06:46:40 -- accel/accel.sh@17 -- # local accel_module 00:07:36.561 06:46:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:36.561 06:46:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:36.561 06:46:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.561 06:46:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.561 06:46:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.561 06:46:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.561 06:46:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.561 06:46:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.561 06:46:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.561 06:46:40 -- accel/accel.sh@42 -- # jq -r . 00:07:36.561 [2024-12-13 06:46:40.759645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.561 [2024-12-13 06:46:40.759922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68801 ] 00:07:36.561 [2024-12-13 06:46:40.896657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.561 [2024-12-13 06:46:40.927113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.941 06:46:42 -- accel/accel.sh@18 -- # out=' 00:07:37.941 SPDK Configuration: 00:07:37.941 Core mask: 0x1 00:07:37.941 00:07:37.941 Accel Perf Configuration: 00:07:37.941 Workload Type: dif_generate_copy 00:07:37.941 Vector size: 4096 bytes 00:07:37.941 Transfer size: 4096 bytes 00:07:37.941 Vector count 1 00:07:37.941 Module: software 00:07:37.941 Queue depth: 32 00:07:37.941 Allocate depth: 32 00:07:37.941 # threads/core: 1 00:07:37.941 Run time: 1 seconds 00:07:37.941 Verify: No 00:07:37.941 00:07:37.941 Running for 1 seconds... 00:07:37.941 00:07:37.941 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.941 ------------------------------------------------------------------------------------ 00:07:37.941 0,0 110144/s 436 MiB/s 0 0 00:07:37.941 ==================================================================================== 00:07:37.941 Total 110144/s 430 MiB/s 0 0' 00:07:37.941 06:46:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:37.941 06:46:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.941 06:46:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.941 06:46:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.941 06:46:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.941 06:46:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.941 06:46:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.941 06:46:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.941 06:46:42 -- accel/accel.sh@42 -- # jq -r . 00:07:37.941 [2024-12-13 06:46:42.069565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:37.941 [2024-12-13 06:46:42.069656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68820 ] 00:07:37.941 [2024-12-13 06:46:42.200339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.941 [2024-12-13 06:46:42.235475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val=0x1 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.941 06:46:42 -- accel/accel.sh@21 -- # val=software 00:07:37.941 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.941 06:46:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.941 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val=32 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val=32 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 
-- # val=1 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val=No 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.942 06:46:42 -- accel/accel.sh@21 -- # val= 00:07:37.942 06:46:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.942 06:46:42 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@21 -- # val= 00:07:38.879 06:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.879 06:46:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.879 06:46:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.879 06:46:43 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:38.879 06:46:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.879 00:07:38.879 real 0m2.622s 00:07:38.879 user 0m2.272s 00:07:38.879 sys 0m0.147s 00:07:38.879 ************************************ 00:07:38.879 END TEST accel_dif_generate_copy 00:07:38.879 ************************************ 00:07:38.879 06:46:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.879 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:07:38.879 06:46:43 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:38.879 06:46:43 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.879 06:46:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:38.879 06:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.879 06:46:43 -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.138 ************************************ 00:07:39.138 START TEST accel_comp 00:07:39.138 ************************************ 00:07:39.138 06:46:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.138 06:46:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.138 06:46:43 -- accel/accel.sh@17 -- # local accel_module 00:07:39.138 06:46:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.138 06:46:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.138 06:46:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.138 06:46:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.138 06:46:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.138 06:46:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.138 06:46:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.138 06:46:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.138 06:46:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.138 06:46:43 -- accel/accel.sh@42 -- # jq -r . 00:07:39.138 [2024-12-13 06:46:43.424800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.138 [2024-12-13 06:46:43.424881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68855 ] 00:07:39.138 [2024-12-13 06:46:43.553211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.138 [2024-12-13 06:46:43.584176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.515 06:46:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.515 00:07:40.515 SPDK Configuration: 00:07:40.515 Core mask: 0x1 00:07:40.515 00:07:40.515 Accel Perf Configuration: 00:07:40.515 Workload Type: compress 00:07:40.515 Transfer size: 4096 bytes 00:07:40.515 Vector count 1 00:07:40.515 Module: software 00:07:40.515 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.515 Queue depth: 32 00:07:40.515 Allocate depth: 32 00:07:40.515 # threads/core: 1 00:07:40.515 Run time: 1 seconds 00:07:40.515 Verify: No 00:07:40.515 00:07:40.515 Running for 1 seconds... 
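This compress case is the first one that reads a real input: the -l flag in the command line above points accel_perf at test/accel/bib, which the out= capture acknowledges with "Preparing input file..." and the configuration echoes back as File Name. A reproduction sketch under the same assumptions as earlier (CI VM paths, empty JSON config omitted):

    # Sketch: rerun the compress case; -l names the input file each
    # 4096-byte operation draws from, per the configuration block above.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib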
00:07:40.515 00:07:40.515 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.515 ------------------------------------------------------------------------------------ 00:07:40.515 0,0 55776/s 232 MiB/s 0 0 00:07:40.515 ==================================================================================== 00:07:40.515 Total 55776/s 217 MiB/s 0 0' 00:07:40.515 06:46:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.515 06:46:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.515 06:46:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.515 06:46:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.515 06:46:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.515 06:46:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.515 06:46:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.515 06:46:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.515 06:46:44 -- accel/accel.sh@42 -- # jq -r . 00:07:40.515 [2024-12-13 06:46:44.725520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.515 [2024-12-13 06:46:44.725613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68869 ] 00:07:40.515 [2024-12-13 06:46:44.859636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.515 [2024-12-13 06:46:44.889820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val=0x1 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val=compress 00:07:40.515 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.515 06:46:44 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # IFS=: 
00:07:40.515 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.515 06:46:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=software 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=32 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=32 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=1 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val=No 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.516 06:46:44 -- accel/accel.sh@21 -- # val= 00:07:40.516 06:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # IFS=: 00:07:40.516 06:46:44 -- accel/accel.sh@20 -- # read -r var val 00:07:41.892 06:46:46 -- accel/accel.sh@21 -- # val= 00:07:41.892 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.892 06:46:46 -- accel/accel.sh@21 -- # val= 00:07:41.892 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.892 06:46:46 -- accel/accel.sh@21 -- # val= 00:07:41.892 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.892 06:46:46 -- accel/accel.sh@21 -- # val= 
00:07:41.892 ************************************ 00:07:41.892 END TEST accel_comp 00:07:41.892 ************************************ 00:07:41.892 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.892 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.893 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.893 06:46:46 -- accel/accel.sh@21 -- # val= 00:07:41.893 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.893 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.893 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.893 06:46:46 -- accel/accel.sh@21 -- # val= 00:07:41.893 06:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.893 06:46:46 -- accel/accel.sh@20 -- # IFS=: 00:07:41.893 06:46:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.893 06:46:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.893 06:46:46 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:41.893 06:46:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.893 00:07:41.893 real 0m2.608s 00:07:41.893 user 0m2.277s 00:07:41.893 sys 0m0.131s 00:07:41.893 06:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.893 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.893 06:46:46 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.893 06:46:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:41.893 06:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.893 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.893 ************************************ 00:07:41.893 START TEST accel_decomp 00:07:41.893 ************************************ 00:07:41.893 06:46:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.893 06:46:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.893 06:46:46 -- accel/accel.sh@17 -- # local accel_module 00:07:41.893 06:46:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.893 06:46:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:41.893 06:46:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.893 06:46:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.893 06:46:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.893 06:46:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.893 06:46:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.893 06:46:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.893 06:46:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.893 06:46:46 -- accel/accel.sh@42 -- # jq -r . 00:07:41.893 [2024-12-13 06:46:46.094154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.893 [2024-12-13 06:46:46.094769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68898 ] 00:07:41.893 [2024-12-13 06:46:46.226860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.893 [2024-12-13 06:46:46.259940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.270 06:46:47 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:43.270 00:07:43.270 SPDK Configuration: 00:07:43.270 Core mask: 0x1 00:07:43.270 00:07:43.270 Accel Perf Configuration: 00:07:43.270 Workload Type: decompress 00:07:43.270 Transfer size: 4096 bytes 00:07:43.270 Vector count 1 00:07:43.270 Module: software 00:07:43.270 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.270 Queue depth: 32 00:07:43.270 Allocate depth: 32 00:07:43.270 # threads/core: 1 00:07:43.270 Run time: 1 seconds 00:07:43.270 Verify: Yes 00:07:43.270 00:07:43.270 Running for 1 seconds... 00:07:43.270 00:07:43.270 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.270 ------------------------------------------------------------------------------------ 00:07:43.270 0,0 80512/s 148 MiB/s 0 0 00:07:43.270 ==================================================================================== 00:07:43.270 Total 80512/s 314 MiB/s 0 0' 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.270 06:46:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.270 06:46:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.270 06:46:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.270 06:46:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.270 06:46:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.270 06:46:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.270 06:46:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.270 06:46:47 -- accel/accel.sh@42 -- # jq -r . 00:07:43.270 [2024-12-13 06:46:47.414086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:43.270 [2024-12-13 06:46:47.414334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68923 ] 00:07:43.270 [2024-12-13 06:46:47.549824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.270 [2024-12-13 06:46:47.580124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val=0x1 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val=decompress 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.270 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.270 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.270 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val=software 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val=32 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- 
accel/accel.sh@21 -- # val=32 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val=1 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val=Yes 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.271 06:46:47 -- accel/accel.sh@21 -- # val= 00:07:43.271 06:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # IFS=: 00:07:43.271 06:46:47 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@21 -- # val= 00:07:44.206 06:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # IFS=: 00:07:44.206 06:46:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.206 06:46:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.206 06:46:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.206 06:46:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.206 00:07:44.206 real 0m2.630s 00:07:44.206 user 0m2.290s 00:07:44.206 sys 0m0.139s 00:07:44.206 06:46:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.206 06:46:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 ************************************ 00:07:44.206 END TEST accel_decomp 00:07:44.206 ************************************ 00:07:44.467 06:46:48 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
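Editor's annotation (not part of the captured output): the run_test line above launches the full-buffer decompress case (logged as accel_decmop_full). Below is a minimal reproduction sketch of the underlying invocation, with paths exactly as they appear in this log; the harness additionally feeds a JSON config over -c /dev/fd/62, omitted here. Judging by the configuration dump that follows, -o 0 makes the transfer size the whole 111250-byte input rather than the 4096 bytes used by the previous test, while -t 1 sets the 1-second run time, -l names the compressed input file, and -y enables verification.

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0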
00:07:44.467 06:46:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:44.467 06:46:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.467 06:46:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.467 ************************************ 00:07:44.467 START TEST accel_decmop_full 00:07:44.467 ************************************ 00:07:44.467 06:46:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:44.467 06:46:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.467 06:46:48 -- accel/accel.sh@17 -- # local accel_module 00:07:44.467 06:46:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:44.467 06:46:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:44.467 06:46:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.467 06:46:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.467 06:46:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.467 06:46:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.467 06:46:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.467 06:46:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.467 06:46:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.467 06:46:48 -- accel/accel.sh@42 -- # jq -r . 00:07:44.467 [2024-12-13 06:46:48.777777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.468 [2024-12-13 06:46:48.777881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68952 ] 00:07:44.468 [2024-12-13 06:46:48.914943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.468 [2024-12-13 06:46:48.945278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.865 06:46:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:45.865 00:07:45.865 SPDK Configuration: 00:07:45.865 Core mask: 0x1 00:07:45.865 00:07:45.865 Accel Perf Configuration: 00:07:45.865 Workload Type: decompress 00:07:45.865 Transfer size: 111250 bytes 00:07:45.865 Vector count 1 00:07:45.865 Module: software 00:07:45.865 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.865 Queue depth: 32 00:07:45.865 Allocate depth: 32 00:07:45.865 # threads/core: 1 00:07:45.865 Run time: 1 seconds 00:07:45.865 Verify: Yes 00:07:45.865 00:07:45.865 Running for 1 seconds... 
00:07:45.865 00:07:45.865 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.865 ------------------------------------------------------------------------------------ 00:07:45.865 0,0 5280/s 218 MiB/s 0 0 00:07:45.865 ==================================================================================== 00:07:45.865 Total 5280/s 560 MiB/s 0 0' 00:07:45.865 06:46:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.865 06:46:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.865 06:46:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.865 06:46:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.865 06:46:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.865 06:46:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.865 06:46:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.865 06:46:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.865 06:46:50 -- accel/accel.sh@42 -- # jq -r . 00:07:45.865 [2024-12-13 06:46:50.101954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.865 [2024-12-13 06:46:50.102494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68966 ] 00:07:45.865 [2024-12-13 06:46:50.238438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.865 [2024-12-13 06:46:50.268584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val=0x1 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val=decompress 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.865 06:46:50 -- accel/accel.sh@20 
-- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.865 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.865 06:46:50 -- accel/accel.sh@21 -- # val=software 00:07:45.865 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val=32 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val=32 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val=1 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val=Yes 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.866 06:46:50 -- accel/accel.sh@21 -- # val= 00:07:45.866 06:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.866 06:46:50 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # 
val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@21 -- # val= 00:07:47.244 06:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # IFS=: 00:07:47.244 06:46:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.244 06:46:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.244 06:46:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.244 06:46:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.244 00:07:47.244 real 0m2.651s 00:07:47.244 user 0m2.298s 00:07:47.244 sys 0m0.148s 00:07:47.244 06:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.244 06:46:51 -- common/autotest_common.sh@10 -- # set +x 00:07:47.244 ************************************ 00:07:47.244 END TEST accel_decmop_full 00:07:47.244 ************************************ 00:07:47.244 06:46:51 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.244 06:46:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:47.244 06:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.244 06:46:51 -- common/autotest_common.sh@10 -- # set +x 00:07:47.244 ************************************ 00:07:47.244 START TEST accel_decomp_mcore 00:07:47.244 ************************************ 00:07:47.244 06:46:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.244 06:46:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.244 06:46:51 -- accel/accel.sh@17 -- # local accel_module 00:07:47.244 06:46:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.244 06:46:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.244 06:46:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.244 06:46:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.244 06:46:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.244 06:46:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.244 06:46:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.244 06:46:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.244 06:46:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.244 06:46:51 -- accel/accel.sh@42 -- # jq -r . 00:07:47.244 [2024-12-13 06:46:51.472773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:47.244 [2024-12-13 06:46:51.472871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69005 ] 00:07:47.244 [2024-12-13 06:46:51.608792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.244 [2024-12-13 06:46:51.640948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.244 [2024-12-13 06:46:51.641090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.244 [2024-12-13 06:46:51.641218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.244 [2024-12-13 06:46:51.641435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.621 06:46:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:48.621 00:07:48.621 SPDK Configuration: 00:07:48.621 Core mask: 0xf 00:07:48.621 00:07:48.621 Accel Perf Configuration: 00:07:48.621 Workload Type: decompress 00:07:48.621 Transfer size: 4096 bytes 00:07:48.621 Vector count 1 00:07:48.621 Module: software 00:07:48.621 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.621 Queue depth: 32 00:07:48.621 Allocate depth: 32 00:07:48.621 # threads/core: 1 00:07:48.621 Run time: 1 seconds 00:07:48.621 Verify: Yes 00:07:48.621 00:07:48.621 Running for 1 seconds... 00:07:48.621 00:07:48.621 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.621 ------------------------------------------------------------------------------------ 00:07:48.621 0,0 64608/s 119 MiB/s 0 0 00:07:48.621 3,0 60992/s 112 MiB/s 0 0 00:07:48.621 2,0 61536/s 113 MiB/s 0 0 00:07:48.621 1,0 60384/s 111 MiB/s 0 0 00:07:48.621 ==================================================================================== 00:07:48.621 Total 247520/s 966 MiB/s 0 0' 00:07:48.621 06:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:48.621 06:46:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:48.621 06:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:48.621 06:46:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:48.621 06:46:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.621 06:46:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.621 06:46:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.621 06:46:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.621 06:46:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.621 06:46:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.621 06:46:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.621 06:46:52 -- accel/accel.sh@42 -- # jq -r . 00:07:48.621 [2024-12-13 06:46:52.796489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.621 [2024-12-13 06:46:52.796758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69023 ] 00:07:48.621 [2024-12-13 06:46:52.933316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.621 [2024-12-13 06:46:52.965661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.621 [2024-12-13 06:46:52.965776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.621 [2024-12-13 06:46:52.965886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.621 [2024-12-13 06:46:52.965886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.621 06:46:52 -- accel/accel.sh@21 -- # val= 00:07:48.621 06:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.621 06:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:48.621 06:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:48.621 06:46:52 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:52 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:52 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:52 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:52 -- accel/accel.sh@21 -- # val=0xf 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=decompress 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=software 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 
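Editor's annotation (not part of the captured output): the accel_decomp_mcore runs around this point add -m 0xf, and the log itself shows the effect — the EAL parameters carry -c 0xf, four reactors start on cores 0 through 3, and the results table above gains one row per core for a combined ~247k 4096-byte transfers per second. A sketch of the invocation as logged (JSON config via -c /dev/fd/62 again omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf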
00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=32 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=32 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=1 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val=Yes 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.622 06:46:53 -- accel/accel.sh@21 -- # val= 00:07:48.622 06:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.622 06:46:53 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@21 -- # val= 00:07:49.998 06:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.998 06:46:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.998 06:46:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.998 06:46:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.998 06:46:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.998 00:07:49.998 real 0m2.651s 00:07:49.998 user 0m8.712s 00:07:49.998 sys 0m0.159s 00:07:49.998 06:46:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.998 ************************************ 00:07:49.998 END TEST accel_decomp_mcore 00:07:49.998 ************************************ 00:07:49.998 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:07:49.998 06:46:54 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 06:46:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:49.998 06:46:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.998 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:07:49.998 ************************************ 00:07:49.998 START TEST accel_decomp_full_mcore 00:07:49.998 ************************************ 00:07:49.998 06:46:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 06:46:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.998 06:46:54 -- accel/accel.sh@17 -- # local accel_module 00:07:49.998 06:46:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 06:46:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 06:46:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.998 06:46:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.998 06:46:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.998 06:46:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.998 06:46:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.998 06:46:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.998 06:46:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.998 06:46:54 -- accel/accel.sh@42 -- # jq -r . 00:07:49.998 [2024-12-13 06:46:54.172144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.999 [2024-12-13 06:46:54.172232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69055 ] 00:07:49.999 [2024-12-13 06:46:54.308775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.999 [2024-12-13 06:46:54.341391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.999 [2024-12-13 06:46:54.341482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.999 [2024-12-13 06:46:54.341610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.999 [2024-12-13 06:46:54.341611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.371 06:46:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:51.371 00:07:51.371 SPDK Configuration: 00:07:51.371 Core mask: 0xf 00:07:51.371 00:07:51.371 Accel Perf Configuration: 00:07:51.371 Workload Type: decompress 00:07:51.371 Transfer size: 111250 bytes 00:07:51.371 Vector count 1 00:07:51.371 Module: software 00:07:51.371 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.371 Queue depth: 32 00:07:51.371 Allocate depth: 32 00:07:51.371 # threads/core: 1 00:07:51.371 Run time: 1 seconds 00:07:51.371 Verify: Yes 00:07:51.371 00:07:51.371 Running for 1 seconds... 00:07:51.371 00:07:51.371 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.371 ------------------------------------------------------------------------------------ 00:07:51.372 0,0 4832/s 199 MiB/s 0 0 00:07:51.372 3,0 4832/s 199 MiB/s 0 0 00:07:51.372 2,0 4864/s 200 MiB/s 0 0 00:07:51.372 1,0 4864/s 200 MiB/s 0 0 00:07:51.372 ==================================================================================== 00:07:51.372 Total 19392/s 2057 MiB/s 0 0' 00:07:51.372 06:46:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.372 06:46:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.372 06:46:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.372 06:46:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.372 06:46:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.372 06:46:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.372 06:46:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.372 06:46:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.372 06:46:55 -- accel/accel.sh@42 -- # jq -r . 00:07:51.372 [2024-12-13 06:46:55.504478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.372 [2024-12-13 06:46:55.504563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69078 ] 00:07:51.372 [2024-12-13 06:46:55.636024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.372 [2024-12-13 06:46:55.672911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.372 [2024-12-13 06:46:55.673020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.372 [2024-12-13 06:46:55.673140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.372 [2024-12-13 06:46:55.673145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=0xf 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=decompress 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=software 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 
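Editor's annotation (not part of the captured output): accel_decomp_full_mcore combines the two earlier variants — full 111250-byte transfers (-o 0) spread across all four cores (-m 0xf) — which the table above reflects as roughly 4.8k transfers/s per core and 19392/s in total. The later *_mthread tests instead pass -T 2, visible in their "# threads/core: 2" configuration dumps and the paired 0,0/0,1 result rows. A sketch of the combined invocation:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf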
00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=32 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=32 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=1 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val=Yes 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:51.372 06:46:55 -- accel/accel.sh@21 -- # val= 00:07:51.372 06:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # IFS=: 00:07:51.372 06:46:55 -- accel/accel.sh@20 -- # read -r var val 00:07:52.305 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.305 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.305 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.305 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.305 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.305 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.305 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.305 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.305 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.305 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.305 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.306 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.306 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.306 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.306 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.306 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.565 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.565 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.565 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.565 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.565 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.565 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.565 06:46:56 -- 
accel/accel.sh@20 -- # read -r var val 00:07:52.565 06:46:56 -- accel/accel.sh@21 -- # val= 00:07:52.565 06:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.565 06:46:56 -- accel/accel.sh@20 -- # IFS=: 00:07:52.565 06:46:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.565 06:46:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.565 06:46:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.565 06:46:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.565 00:07:52.565 real 0m2.679s 00:07:52.565 user 0m8.814s 00:07:52.565 sys 0m0.167s 00:07:52.565 06:46:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.565 06:46:56 -- common/autotest_common.sh@10 -- # set +x 00:07:52.565 ************************************ 00:07:52.565 END TEST accel_decomp_full_mcore 00:07:52.565 ************************************ 00:07:52.565 06:46:56 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.565 06:46:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:52.565 06:46:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.565 06:46:56 -- common/autotest_common.sh@10 -- # set +x 00:07:52.565 ************************************ 00:07:52.565 START TEST accel_decomp_mthread 00:07:52.565 ************************************ 00:07:52.565 06:46:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.565 06:46:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.565 06:46:56 -- accel/accel.sh@17 -- # local accel_module 00:07:52.565 06:46:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.565 06:46:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.565 06:46:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.565 06:46:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.565 06:46:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.565 06:46:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.565 06:46:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.565 06:46:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.565 06:46:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.565 06:46:56 -- accel/accel.sh@42 -- # jq -r . 00:07:52.565 [2024-12-13 06:46:56.905792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.565 [2024-12-13 06:46:56.905879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69115 ] 00:07:52.565 [2024-12-13 06:46:57.043861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.565 [2024-12-13 06:46:57.074196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.943 06:46:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:53.943 00:07:53.943 SPDK Configuration: 00:07:53.943 Core mask: 0x1 00:07:53.943 00:07:53.943 Accel Perf Configuration: 00:07:53.943 Workload Type: decompress 00:07:53.943 Transfer size: 4096 bytes 00:07:53.943 Vector count 1 00:07:53.943 Module: software 00:07:53.943 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.943 Queue depth: 32 00:07:53.943 Allocate depth: 32 00:07:53.943 # threads/core: 2 00:07:53.943 Run time: 1 seconds 00:07:53.943 Verify: Yes 00:07:53.943 00:07:53.943 Running for 1 seconds... 00:07:53.943 00:07:53.943 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.943 ------------------------------------------------------------------------------------ 00:07:53.943 0,1 40160/s 74 MiB/s 0 0 00:07:53.943 0,0 40000/s 73 MiB/s 0 0 00:07:53.943 ==================================================================================== 00:07:53.943 Total 80160/s 313 MiB/s 0 0' 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.943 06:46:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.943 06:46:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.943 06:46:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.943 06:46:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.943 06:46:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.943 06:46:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.943 06:46:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.943 06:46:58 -- accel/accel.sh@42 -- # jq -r . 00:07:53.943 [2024-12-13 06:46:58.220794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:53.943 [2024-12-13 06:46:58.220881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69129 ] 00:07:53.943 [2024-12-13 06:46:58.356916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.943 [2024-12-13 06:46:58.386942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val=0x1 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.943 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.943 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.943 06:46:58 -- accel/accel.sh@21 -- # val=decompress 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val=software 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val=32 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- 
accel/accel.sh@21 -- # val=32 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val=2 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val=Yes 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.944 06:46:58 -- accel/accel.sh@21 -- # val= 00:07:53.944 06:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.944 06:46:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.322 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.322 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.322 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.323 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.323 06:46:59 -- accel/accel.sh@21 -- # val= 00:07:55.323 06:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.323 06:46:59 -- accel/accel.sh@20 -- # IFS=: 00:07:55.323 06:46:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.323 06:46:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.323 06:46:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.323 06:46:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.323 00:07:55.323 real 0m2.632s 00:07:55.323 user 0m2.284s 00:07:55.323 sys 0m0.149s 00:07:55.323 06:46:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.323 ************************************ 00:07:55.323 END TEST accel_decomp_mthread 00:07:55.323 
************************************ 00:07:55.323 06:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.323 06:46:59 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.323 06:46:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:55.323 06:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.323 06:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.323 ************************************ 00:07:55.323 START TEST accel_deomp_full_mthread 00:07:55.323 ************************************ 00:07:55.323 06:46:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.323 06:46:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.323 06:46:59 -- accel/accel.sh@17 -- # local accel_module 00:07:55.323 06:46:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.323 06:46:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.323 06:46:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.323 06:46:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.323 06:46:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.323 06:46:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.323 06:46:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.323 06:46:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.323 06:46:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.323 06:46:59 -- accel/accel.sh@42 -- # jq -r . 00:07:55.323 [2024-12-13 06:46:59.584931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.323 [2024-12-13 06:46:59.585028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69164 ] 00:07:55.323 [2024-12-13 06:46:59.713601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.323 [2024-12-13 06:46:59.743810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.700 06:47:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:56.700 00:07:56.700 SPDK Configuration: 00:07:56.700 Core mask: 0x1 00:07:56.700 00:07:56.700 Accel Perf Configuration: 00:07:56.700 Workload Type: decompress 00:07:56.700 Transfer size: 111250 bytes 00:07:56.700 Vector count 1 00:07:56.700 Module: software 00:07:56.700 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.700 Queue depth: 32 00:07:56.700 Allocate depth: 32 00:07:56.700 # threads/core: 2 00:07:56.700 Run time: 1 seconds 00:07:56.700 Verify: Yes 00:07:56.700 00:07:56.700 Running for 1 seconds... 
00:07:56.700 00:07:56.700 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.700 ------------------------------------------------------------------------------------ 00:07:56.700 0,1 2720/s 112 MiB/s 0 0 00:07:56.700 0,0 2688/s 111 MiB/s 0 0 00:07:56.700 ==================================================================================== 00:07:56.700 Total 5408/s 573 MiB/s 0 0' 00:07:56.700 06:47:00 -- accel/accel.sh@20 -- # IFS=: 00:07:56.700 06:47:00 -- accel/accel.sh@20 -- # read -r var val 00:07:56.700 06:47:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.700 06:47:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.700 06:47:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.700 06:47:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.700 06:47:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.700 06:47:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.700 06:47:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.700 06:47:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.700 06:47:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.700 06:47:00 -- accel/accel.sh@42 -- # jq -r . 00:07:56.700 [2024-12-13 06:47:00.910838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.700 [2024-12-13 06:47:00.910928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:07:56.700 [2024-12-13 06:47:01.047909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.700 [2024-12-13 06:47:01.078083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.700 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.700 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.700 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.700 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.700 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.700 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.700 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.700 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.700 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.700 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.700 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=0x1 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=decompress 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=software 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=32 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=32 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=2 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val=Yes 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.701 06:47:01 -- accel/accel.sh@21 -- # val= 00:07:56.701 06:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.701 06:47:01 -- accel/accel.sh@20 -- # read -r var val 00:07:58.079 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.079 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.079 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.079 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.079 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.079 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.079 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.079 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.080 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # 
read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.080 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.080 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.080 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@21 -- # val= 00:07:58.080 06:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # IFS=: 00:07:58.080 06:47:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.080 06:47:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:58.080 06:47:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:58.080 06:47:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.080 00:07:58.080 real 0m2.657s 00:07:58.080 user 0m2.318s 00:07:58.080 sys 0m0.137s 00:07:58.080 06:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.080 ************************************ 00:07:58.080 END TEST accel_deomp_full_mthread 00:07:58.080 ************************************ 00:07:58.080 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.080 06:47:02 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:58.080 06:47:02 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:58.080 06:47:02 -- accel/accel.sh@129 -- # build_accel_config 00:07:58.080 06:47:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.080 06:47:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:58.080 06:47:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.080 06:47:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.080 06:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.080 06:47:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.080 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.080 06:47:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.080 06:47:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.080 06:47:02 -- accel/accel.sh@42 -- # jq -r . 00:07:58.080 ************************************ 00:07:58.080 START TEST accel_dif_functional_tests 00:07:58.080 ************************************ 00:07:58.080 06:47:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:58.080 [2024-12-13 06:47:02.320663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
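With accel_deomp_full_mthread finished (real 0m2.657s above), what it ran reduces to the same accel_perf binary minus the harness plumbing. A sketch, with flag values copied verbatim from the xtrace and comments restating the configuration dump the tool itself printed:

SPDK=/home/vagrant/spdk_repo/spdk
# -t 1: run for 1 second; -w decompress: workload type; -T 2: two threads per core
# -l: compressed input file; -y: verify output; -o 0: transfer size as the test passes it
# (the tool reported 'Transfer size: 111250 bytes' for this input)
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -T 2 \
    -l "$SPDK/test/accel/bib" -y -o 0

The '0,0' and '0,1' rows in the results table above are the per-thread counters that -T 2 produces; 'Module: software' in the dump confirms the software path handled the decompress.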
00:07:58.080 [2024-12-13 06:47:02.320773] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69213 ] 00:07:58.080 [2024-12-13 06:47:02.455265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.080 [2024-12-13 06:47:02.487343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.080 [2024-12-13 06:47:02.487483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.080 [2024-12-13 06:47:02.487487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.080 00:07:58.080 00:07:58.080 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.080 http://cunit.sourceforge.net/ 00:07:58.080 00:07:58.080 00:07:58.080 Suite: accel_dif 00:07:58.080 Test: verify: DIF generated, GUARD check ...passed 00:07:58.080 Test: verify: DIF generated, APPTAG check ...passed 00:07:58.080 Test: verify: DIF generated, REFTAG check ...passed 00:07:58.080 Test: verify: DIF not generated, GUARD check ...passed 00:07:58.080 Test: verify: DIF not generated, APPTAG check ...passed 00:07:58.080 Test: verify: DIF not generated, REFTAG check ...[2024-12-13 06:47:02.533293] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.080 [2024-12-13 06:47:02.533401] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.080 [2024-12-13 06:47:02.533457] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.080 [2024-12-13 06:47:02.533496] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.080 passed 00:07:58.080 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:58.080 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:58.080 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:58.080 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-12-13 06:47:02.533521] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.080 [2024-12-13 06:47:02.533550] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.080 [2024-12-13 06:47:02.533603] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:58.080 passed 00:07:58.080 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:58.080 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:58.080 Test: generate copy: DIF generated, GUARD check ...passed 00:07:58.080 Test: generate copy: DIF generated, APTTAG check ...[2024-12-13 06:47:02.533756] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:58.080 passed 00:07:58.080 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:58.080 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:58.080 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:58.080 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:58.080 Test: generate copy: iovecs-len validate ...[2024-12-13 06:47:02.534029] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:58.080 passed 00:07:58.080 Test: generate copy: buffer alignment validate ...passed 00:07:58.080 00:07:58.080 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.080 suites 1 1 n/a 0 0 00:07:58.080 tests 20 20 20 0 0 00:07:58.080 asserts 204 204 204 0 n/a 00:07:58.080 00:07:58.080 Elapsed time = 0.002 seconds 00:07:58.340 00:07:58.340 real 0m0.404s 00:07:58.340 user 0m0.451s 00:07:58.340 sys 0m0.098s 00:07:58.340 06:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.340 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.340 ************************************ 00:07:58.340 END TEST accel_dif_functional_tests 00:07:58.340 ************************************ 00:07:58.340 00:07:58.340 real 0m56.672s 00:07:58.340 user 1m1.872s 00:07:58.340 sys 0m4.216s 00:07:58.340 06:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.340 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.340 ************************************ 00:07:58.340 END TEST accel 00:07:58.340 ************************************ 00:07:58.340 06:47:02 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:58.340 06:47:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.340 06:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.340 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.340 ************************************ 00:07:58.340 START TEST accel_rpc 00:07:58.340 ************************************ 00:07:58.340 06:47:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:58.340 * Looking for test storage... 00:07:58.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:58.340 06:47:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:58.340 06:47:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:58.340 06:47:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:58.598 06:47:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:58.598 06:47:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:58.598 06:47:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:58.598 06:47:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:58.598 06:47:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:58.598 06:47:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:58.598 06:47:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.598 06:47:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:58.598 06:47:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:58.598 06:47:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:58.598 06:47:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:58.598 06:47:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:58.598 06:47:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:58.598 06:47:02 -- scripts/common.sh@344 -- # : 1 00:07:58.598 06:47:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:58.598 06:47:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.598 06:47:02 -- scripts/common.sh@364 -- # decimal 1 00:07:58.598 06:47:02 -- scripts/common.sh@352 -- # local d=1 00:07:58.598 06:47:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.598 06:47:02 -- scripts/common.sh@354 -- # echo 1 00:07:58.598 06:47:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:58.598 06:47:02 -- scripts/common.sh@365 -- # decimal 2 00:07:58.598 06:47:02 -- scripts/common.sh@352 -- # local d=2 00:07:58.598 06:47:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.598 06:47:02 -- scripts/common.sh@354 -- # echo 2 00:07:58.598 06:47:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:58.598 06:47:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:58.598 06:47:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:58.598 06:47:02 -- scripts/common.sh@367 -- # return 0 00:07:58.598 06:47:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.598 06:47:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:58.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.599 --rc genhtml_branch_coverage=1 00:07:58.599 --rc genhtml_function_coverage=1 00:07:58.599 --rc genhtml_legend=1 00:07:58.599 --rc geninfo_all_blocks=1 00:07:58.599 --rc geninfo_unexecuted_blocks=1 00:07:58.599 00:07:58.599 ' 00:07:58.599 06:47:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:58.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.599 --rc genhtml_branch_coverage=1 00:07:58.599 --rc genhtml_function_coverage=1 00:07:58.599 --rc genhtml_legend=1 00:07:58.599 --rc geninfo_all_blocks=1 00:07:58.599 --rc geninfo_unexecuted_blocks=1 00:07:58.599 00:07:58.599 ' 00:07:58.599 06:47:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:58.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.599 --rc genhtml_branch_coverage=1 00:07:58.599 --rc genhtml_function_coverage=1 00:07:58.599 --rc genhtml_legend=1 00:07:58.599 --rc geninfo_all_blocks=1 00:07:58.599 --rc geninfo_unexecuted_blocks=1 00:07:58.599 00:07:58.599 ' 00:07:58.599 06:47:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:58.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.599 --rc genhtml_branch_coverage=1 00:07:58.599 --rc genhtml_function_coverage=1 00:07:58.599 --rc genhtml_legend=1 00:07:58.599 --rc geninfo_all_blocks=1 00:07:58.599 --rc geninfo_unexecuted_blocks=1 00:07:58.599 00:07:58.599 ' 00:07:58.599 06:47:02 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.599 06:47:02 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69285 00:07:58.599 06:47:02 -- accel/accel_rpc.sh@15 -- # waitforlisten 69285 00:07:58.599 06:47:02 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.599 06:47:02 -- common/autotest_common.sh@829 -- # '[' -z 69285 ']' 00:07:58.599 06:47:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.599 06:47:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.599 06:47:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
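The accel_assign_opcode case that follows is pure JSON-RPC against the just-launched target. Condensed from its rpc_cmd calls, the sequence is sketched below (error handling and the waitforlisten polling the log shows are elided):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &    # target holds before subsystem init
# (the harness polls /var/tmp/spdk.sock here via waitforlisten)
rpc() { "$SPDK/scripts/rpc.py" "$@"; }
rpc accel_assign_opc -o copy -m incorrect      # pre-init, even a bogus module is recorded
rpc accel_assign_opc -o copy -m software       # a later assignment overrides it
rpc framework_start_init                       # init resolves opcode->module bindings
rpc accel_get_opc_assignments | jq -r .copy    # prints: software
kill %1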
00:07:58.599 06:47:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.599 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.599 [2024-12-13 06:47:03.014211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.599 [2024-12-13 06:47:03.014312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69285 ] 00:07:58.858 [2024-12-13 06:47:03.153280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.858 [2024-12-13 06:47:03.184568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.858 [2024-12-13 06:47:03.184746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.858 06:47:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.858 06:47:03 -- common/autotest_common.sh@862 -- # return 0 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.858 06:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.858 06:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.858 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.858 ************************************ 00:07:58.858 START TEST accel_assign_opcode 00:07:58.858 ************************************ 00:07:58.858 06:47:03 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.858 06:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.858 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.858 [2024-12-13 06:47:03.261168] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.858 06:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.858 06:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.858 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.858 [2024-12-13 06:47:03.269166] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.858 06:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.858 06:47:03 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.858 06:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.858 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.117 06:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.117 06:47:03 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:59.117 06:47:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.117 06:47:03 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:59.118 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.118 06:47:03 -- accel/accel_rpc.sh@42 -- # grep software 00:07:59.118 06:47:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.118 software 00:07:59.118 00:07:59.118 
real 0m0.191s 00:07:59.118 user 0m0.051s 00:07:59.118 sys 0m0.013s 00:07:59.118 06:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.118 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.118 ************************************ 00:07:59.118 END TEST accel_assign_opcode 00:07:59.118 ************************************ 00:07:59.118 06:47:03 -- accel/accel_rpc.sh@55 -- # killprocess 69285 00:07:59.118 06:47:03 -- common/autotest_common.sh@936 -- # '[' -z 69285 ']' 00:07:59.118 06:47:03 -- common/autotest_common.sh@940 -- # kill -0 69285 00:07:59.118 06:47:03 -- common/autotest_common.sh@941 -- # uname 00:07:59.118 06:47:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.118 06:47:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69285 00:07:59.118 06:47:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.118 killing process with pid 69285 00:07:59.118 06:47:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.118 06:47:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69285' 00:07:59.118 06:47:03 -- common/autotest_common.sh@955 -- # kill 69285 00:07:59.118 06:47:03 -- common/autotest_common.sh@960 -- # wait 69285 00:07:59.376 00:07:59.376 real 0m0.961s 00:07:59.376 user 0m0.954s 00:07:59.376 sys 0m0.307s 00:07:59.377 06:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.377 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.377 ************************************ 00:07:59.377 END TEST accel_rpc 00:07:59.377 ************************************ 00:07:59.377 06:47:03 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.377 06:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:59.377 06:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.377 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.377 ************************************ 00:07:59.377 START TEST app_cmdline 00:07:59.377 ************************************ 00:07:59.377 06:47:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.377 * Looking for test storage... 
00:07:59.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.377 06:47:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:59.377 06:47:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:59.377 06:47:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:59.636 06:47:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:59.636 06:47:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:59.636 06:47:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:59.636 06:47:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:59.636 06:47:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:59.636 06:47:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:59.636 06:47:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.636 06:47:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:59.636 06:47:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:59.636 06:47:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:59.636 06:47:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:59.636 06:47:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:59.636 06:47:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:59.636 06:47:03 -- scripts/common.sh@344 -- # : 1 00:07:59.636 06:47:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:59.636 06:47:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.636 06:47:03 -- scripts/common.sh@364 -- # decimal 1 00:07:59.636 06:47:03 -- scripts/common.sh@352 -- # local d=1 00:07:59.636 06:47:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.636 06:47:03 -- scripts/common.sh@354 -- # echo 1 00:07:59.636 06:47:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:59.636 06:47:03 -- scripts/common.sh@365 -- # decimal 2 00:07:59.636 06:47:03 -- scripts/common.sh@352 -- # local d=2 00:07:59.636 06:47:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.636 06:47:03 -- scripts/common.sh@354 -- # echo 2 00:07:59.636 06:47:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:59.636 06:47:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:59.636 06:47:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:59.636 06:47:03 -- scripts/common.sh@367 -- # return 0 00:07:59.636 06:47:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.636 06:47:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.636 --rc genhtml_branch_coverage=1 00:07:59.636 --rc genhtml_function_coverage=1 00:07:59.636 --rc genhtml_legend=1 00:07:59.636 --rc geninfo_all_blocks=1 00:07:59.636 --rc geninfo_unexecuted_blocks=1 00:07:59.636 00:07:59.636 ' 00:07:59.636 06:47:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.636 --rc genhtml_branch_coverage=1 00:07:59.636 --rc genhtml_function_coverage=1 00:07:59.636 --rc genhtml_legend=1 00:07:59.636 --rc geninfo_all_blocks=1 00:07:59.636 --rc geninfo_unexecuted_blocks=1 00:07:59.636 00:07:59.636 ' 00:07:59.636 06:47:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.636 --rc genhtml_branch_coverage=1 00:07:59.636 --rc genhtml_function_coverage=1 00:07:59.636 --rc genhtml_legend=1 00:07:59.636 --rc geninfo_all_blocks=1 00:07:59.636 --rc geninfo_unexecuted_blocks=1 00:07:59.636 00:07:59.636 ' 00:07:59.636 06:47:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.636 --rc genhtml_branch_coverage=1 00:07:59.636 --rc genhtml_function_coverage=1 00:07:59.636 --rc genhtml_legend=1 00:07:59.636 --rc geninfo_all_blocks=1 00:07:59.636 --rc geninfo_unexecuted_blocks=1 00:07:59.636 00:07:59.636 ' 00:07:59.636 06:47:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.636 06:47:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69372 00:07:59.636 06:47:03 -- app/cmdline.sh@18 -- # waitforlisten 69372 00:07:59.636 06:47:03 -- common/autotest_common.sh@829 -- # '[' -z 69372 ']' 00:07:59.636 06:47:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.636 06:47:03 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.636 06:47:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.636 06:47:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.636 06:47:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.636 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.636 [2024-12-13 06:47:04.010822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:59.636 [2024-12-13 06:47:04.010938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69372 ] 00:07:59.636 [2024-12-13 06:47:04.145244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.896 [2024-12-13 06:47:04.178896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.896 [2024-12-13 06:47:04.179092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.464 06:47:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.464 06:47:04 -- common/autotest_common.sh@862 -- # return 0 00:08:00.464 06:47:04 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:00.723 { 00:08:00.723 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:00.723 "fields": { 00:08:00.723 "major": 24, 00:08:00.723 "minor": 1, 00:08:00.723 "patch": 1, 00:08:00.723 "suffix": "-pre", 00:08:00.723 "commit": "c13c99a5e" 00:08:00.723 } 00:08:00.723 } 00:08:00.723 06:47:05 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.723 06:47:05 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.723 06:47:05 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.723 06:47:05 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.723 06:47:05 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.723 06:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.723 06:47:05 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.723 06:47:05 -- common/autotest_common.sh@10 -- # set +x 00:08:00.723 06:47:05 -- app/cmdline.sh@26 -- # sort 00:08:00.723 06:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.982 06:47:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.982 06:47:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.982 06:47:05 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.982 06:47:05 -- common/autotest_common.sh@650 -- # local es=0 00:08:00.982 06:47:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.982 06:47:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.982 06:47:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.982 06:47:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.982 06:47:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.982 06:47:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.982 06:47:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.982 06:47:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.982 06:47:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.982 06:47:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.982 request: 00:08:00.982 { 00:08:00.982 "method": "env_dpdk_get_mem_stats", 00:08:00.982 "req_id": 1 00:08:00.982 } 00:08:00.982 Got JSON-RPC error response 00:08:00.982 response: 00:08:00.982 { 00:08:00.983 "code": -32601, 00:08:00.983 "message": "Method not found" 00:08:00.983 } 00:08:00.983 06:47:05 -- common/autotest_common.sh@653 -- # es=1 00:08:00.983 06:47:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.983 06:47:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.983 06:47:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.983 06:47:05 -- app/cmdline.sh@1 -- # killprocess 69372 00:08:00.983 06:47:05 -- common/autotest_common.sh@936 -- # '[' -z 69372 ']' 00:08:00.983 06:47:05 -- common/autotest_common.sh@940 -- # kill -0 69372 00:08:00.983 06:47:05 -- common/autotest_common.sh@941 -- # uname 00:08:00.983 06:47:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.983 06:47:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69372 00:08:01.242 06:47:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.242 killing process with pid 69372 00:08:01.242 06:47:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.242 06:47:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69372' 00:08:01.242 06:47:05 -- common/autotest_common.sh@955 -- # kill 69372 00:08:01.242 06:47:05 -- common/autotest_common.sh@960 -- # wait 69372 00:08:01.242 00:08:01.242 real 0m1.960s 00:08:01.242 user 0m2.544s 00:08:01.242 sys 0m0.364s 00:08:01.242 06:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.242 06:47:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.242 ************************************ 00:08:01.242 END TEST app_cmdline 00:08:01.242 ************************************ 00:08:01.501 06:47:05 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.501 06:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.501 06:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.501 06:47:05 -- common/autotest_common.sh@10 -- # set +x 00:08:01.501 
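Two takeaways from the app_cmdline run that just ended: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods succeed while everything else returns the -32601 "Method not found" captured above. Against a target started that way, a sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" spdk_get_version | jq -r .version   # -> SPDK v24.01.1-pre git sha1 c13c99a5e
"$rpc" env_dpdk_get_mem_stats || true      # rejected by the allow-list, as in the NOT test above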
************************************ 00:08:01.501 START TEST version 00:08:01.501 ************************************ 00:08:01.501 06:47:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.501 * Looking for test storage... 00:08:01.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.501 06:47:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.501 06:47:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.501 06:47:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.501 06:47:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.501 06:47:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.501 06:47:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.501 06:47:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.501 06:47:05 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.501 06:47:05 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.501 06:47:05 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.501 06:47:05 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.501 06:47:05 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.501 06:47:05 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.501 06:47:05 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.501 06:47:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.501 06:47:05 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.501 06:47:05 -- scripts/common.sh@344 -- # : 1 00:08:01.501 06:47:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.501 06:47:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.501 06:47:05 -- scripts/common.sh@364 -- # decimal 1 00:08:01.501 06:47:05 -- scripts/common.sh@352 -- # local d=1 00:08:01.501 06:47:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.501 06:47:05 -- scripts/common.sh@354 -- # echo 1 00:08:01.501 06:47:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.501 06:47:05 -- scripts/common.sh@365 -- # decimal 2 00:08:01.501 06:47:05 -- scripts/common.sh@352 -- # local d=2 00:08:01.501 06:47:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.501 06:47:05 -- scripts/common.sh@354 -- # echo 2 00:08:01.501 06:47:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.501 06:47:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.501 06:47:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.502 06:47:05 -- scripts/common.sh@367 -- # return 0 00:08:01.502 06:47:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.502 06:47:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.502 --rc genhtml_branch_coverage=1 00:08:01.502 --rc genhtml_function_coverage=1 00:08:01.502 --rc genhtml_legend=1 00:08:01.502 --rc geninfo_all_blocks=1 00:08:01.502 --rc geninfo_unexecuted_blocks=1 00:08:01.502 00:08:01.502 ' 00:08:01.502 06:47:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.502 --rc genhtml_branch_coverage=1 00:08:01.502 --rc genhtml_function_coverage=1 00:08:01.502 --rc genhtml_legend=1 00:08:01.502 --rc geninfo_all_blocks=1 00:08:01.502 --rc geninfo_unexecuted_blocks=1 00:08:01.502 00:08:01.502 ' 00:08:01.502 06:47:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.502 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:01.502 --rc genhtml_branch_coverage=1 00:08:01.502 --rc genhtml_function_coverage=1 00:08:01.502 --rc genhtml_legend=1 00:08:01.502 --rc geninfo_all_blocks=1 00:08:01.502 --rc geninfo_unexecuted_blocks=1 00:08:01.502 00:08:01.502 ' 00:08:01.502 06:47:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.502 --rc genhtml_branch_coverage=1 00:08:01.502 --rc genhtml_function_coverage=1 00:08:01.502 --rc genhtml_legend=1 00:08:01.502 --rc geninfo_all_blocks=1 00:08:01.502 --rc geninfo_unexecuted_blocks=1 00:08:01.502 00:08:01.502 ' 00:08:01.502 06:47:05 -- app/version.sh@17 -- # get_header_version major 00:08:01.502 06:47:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.502 06:47:05 -- app/version.sh@14 -- # cut -f2 00:08:01.502 06:47:05 -- app/version.sh@14 -- # tr -d '"' 00:08:01.502 06:47:05 -- app/version.sh@17 -- # major=24 00:08:01.502 06:47:05 -- app/version.sh@18 -- # get_header_version minor 00:08:01.502 06:47:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.502 06:47:05 -- app/version.sh@14 -- # cut -f2 00:08:01.502 06:47:05 -- app/version.sh@14 -- # tr -d '"' 00:08:01.502 06:47:05 -- app/version.sh@18 -- # minor=1 00:08:01.502 06:47:05 -- app/version.sh@19 -- # get_header_version patch 00:08:01.502 06:47:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.502 06:47:05 -- app/version.sh@14 -- # tr -d '"' 00:08:01.502 06:47:05 -- app/version.sh@14 -- # cut -f2 00:08:01.502 06:47:05 -- app/version.sh@19 -- # patch=1 00:08:01.502 06:47:05 -- app/version.sh@20 -- # get_header_version suffix 00:08:01.502 06:47:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.502 06:47:05 -- app/version.sh@14 -- # cut -f2 00:08:01.502 06:47:05 -- app/version.sh@14 -- # tr -d '"' 00:08:01.502 06:47:05 -- app/version.sh@20 -- # suffix=-pre 00:08:01.502 06:47:05 -- app/version.sh@22 -- # version=24.1 00:08:01.502 06:47:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.502 06:47:05 -- app/version.sh@25 -- # version=24.1.1 00:08:01.502 06:47:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:01.502 06:47:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:01.502 06:47:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.761 06:47:06 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:01.761 06:47:06 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:01.761 00:08:01.761 real 0m0.247s 00:08:01.761 user 0m0.163s 00:08:01.761 sys 0m0.117s 00:08:01.761 06:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.761 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:01.761 ************************************ 00:08:01.761 END TEST version 00:08:01.761 ************************************ 00:08:01.761 06:47:06 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:01.761 06:47:06 -- spdk/autotest.sh@191 -- # uname -s 00:08:01.761 06:47:06 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
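The version.sh steps traced above reduce to a little header parsing plus suffix handling; a sketch that mirrors them (the grep/cut/tr pipeline is copied from the xtrace, while the -pre to rc0 step is restated from the trace rather than from the script source):

hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
get_header_version() {
    # pipeline as shown at app/version.sh@13-14 above
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version major); minor=$(get_header_version minor)
patch=$(get_header_version patch); suffix=$(get_header_version suffix)
version="$major.$minor"                           # 24.1
(( patch != 0 )) && version="$version.$patch"     # 24.1.1
[[ $suffix == -pre ]] && version="${version}rc0"  # 24.1.1rc0
echo "$version"   # matched against python3 -c 'import spdk; print(spdk.__version__)'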
00:08:01.761 06:47:06 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:01.761 06:47:06 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:08:01.761 06:47:06 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:08:01.761 06:47:06 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:01.761 06:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.761 06:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.761 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:01.761 ************************************ 00:08:01.761 START TEST spdk_dd 00:08:01.761 ************************************ 00:08:01.761 06:47:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:01.761 * Looking for test storage... 00:08:01.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.761 06:47:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.761 06:47:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.762 06:47:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.762 06:47:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.762 06:47:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.762 06:47:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.762 06:47:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.762 06:47:06 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.762 06:47:06 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.762 06:47:06 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.762 06:47:06 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.762 06:47:06 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.762 06:47:06 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.762 06:47:06 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.762 06:47:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.762 06:47:06 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.762 06:47:06 -- scripts/common.sh@344 -- # : 1 00:08:01.762 06:47:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.762 06:47:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.762 06:47:06 -- scripts/common.sh@364 -- # decimal 1 00:08:01.762 06:47:06 -- scripts/common.sh@352 -- # local d=1 00:08:01.762 06:47:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.762 06:47:06 -- scripts/common.sh@354 -- # echo 1 00:08:01.762 06:47:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.762 06:47:06 -- scripts/common.sh@365 -- # decimal 2 00:08:01.762 06:47:06 -- scripts/common.sh@352 -- # local d=2 00:08:01.762 06:47:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.762 06:47:06 -- scripts/common.sh@354 -- # echo 2 00:08:01.762 06:47:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.762 06:47:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.762 06:47:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.762 06:47:06 -- scripts/common.sh@367 -- # return 0 00:08:01.762 06:47:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.762 06:47:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.762 --rc genhtml_branch_coverage=1 00:08:01.762 --rc genhtml_function_coverage=1 00:08:01.762 --rc genhtml_legend=1 00:08:01.762 --rc geninfo_all_blocks=1 00:08:01.762 --rc geninfo_unexecuted_blocks=1 00:08:01.762 00:08:01.762 ' 00:08:01.762 06:47:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.762 --rc genhtml_branch_coverage=1 00:08:01.762 --rc genhtml_function_coverage=1 00:08:01.762 --rc genhtml_legend=1 00:08:01.762 --rc geninfo_all_blocks=1 00:08:01.762 --rc geninfo_unexecuted_blocks=1 00:08:01.762 00:08:01.762 ' 00:08:01.762 06:47:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.762 --rc genhtml_branch_coverage=1 00:08:01.762 --rc genhtml_function_coverage=1 00:08:01.762 --rc genhtml_legend=1 00:08:01.762 --rc geninfo_all_blocks=1 00:08:01.762 --rc geninfo_unexecuted_blocks=1 00:08:01.762 00:08:01.762 ' 00:08:01.762 06:47:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.762 --rc genhtml_branch_coverage=1 00:08:01.762 --rc genhtml_function_coverage=1 00:08:01.762 --rc genhtml_legend=1 00:08:01.762 --rc geninfo_all_blocks=1 00:08:01.762 --rc geninfo_unexecuted_blocks=1 00:08:01.762 00:08:01.762 ' 00:08:01.762 06:47:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.762 06:47:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.762 06:47:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.762 06:47:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.762 06:47:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.762 06:47:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.762 06:47:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.762 06:47:06 -- paths/export.sh@5 -- # export PATH 00:08:01.762 06:47:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.762 06:47:06 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:02.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:02.331 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:02.331 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:02.331 06:47:06 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:02.331 06:47:06 -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:02.331 06:47:06 -- scripts/common.sh@311 -- # local bdf bdfs 00:08:02.331 06:47:06 -- scripts/common.sh@312 -- # local nvmes 00:08:02.331 06:47:06 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:08:02.331 06:47:06 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:02.331 06:47:06 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:08:02.331 06:47:06 -- scripts/common.sh@297 -- # local bdf= 00:08:02.331 06:47:06 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:08:02.331 06:47:06 -- scripts/common.sh@232 -- # local class 00:08:02.331 06:47:06 -- scripts/common.sh@233 -- # local subclass 00:08:02.331 06:47:06 -- scripts/common.sh@234 -- # local progif 00:08:02.331 06:47:06 -- scripts/common.sh@235 -- # printf %02x 1 00:08:02.331 06:47:06 -- scripts/common.sh@235 -- # class=01 00:08:02.331 06:47:06 -- scripts/common.sh@236 -- # printf %02x 8 00:08:02.331 06:47:06 -- scripts/common.sh@236 -- # subclass=08 00:08:02.331 06:47:06 -- scripts/common.sh@237 -- # printf %02x 2 00:08:02.331 06:47:06 -- scripts/common.sh@237 -- # progif=02 00:08:02.332 06:47:06 -- scripts/common.sh@239 -- # hash lspci 00:08:02.332 06:47:06 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:08:02.332 06:47:06 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:08:02.332 06:47:06 -- scripts/common.sh@242 -- # grep -i -- -p02 00:08:02.332 06:47:06 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:02.332 06:47:06 -- scripts/common.sh@244 -- # tr -d '"' 00:08:02.332 06:47:06 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.332 06:47:06 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:08:02.332 06:47:06 -- scripts/common.sh@15 -- # local i 00:08:02.332 06:47:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:08:02.332 06:47:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.332 06:47:06 -- scripts/common.sh@24 -- # return 0 00:08:02.332 06:47:06 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:08:02.332 06:47:06 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.332 06:47:06 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:08:02.332 06:47:06 -- scripts/common.sh@15 -- # local i 00:08:02.332 06:47:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:08:02.332 06:47:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.332 06:47:06 -- scripts/common.sh@24 -- # return 0 00:08:02.332 06:47:06 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:08:02.332 06:47:06 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:02.332 06:47:06 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:08:02.332 06:47:06 -- scripts/common.sh@322 -- # uname -s 00:08:02.332 06:47:06 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:02.332 06:47:06 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:02.332 06:47:06 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:02.332 06:47:06 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:08:02.332 06:47:06 -- scripts/common.sh@322 -- # uname -s 00:08:02.332 06:47:06 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:02.332 06:47:06 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:02.332 06:47:06 -- scripts/common.sh@327 -- # (( 2 )) 00:08:02.332 06:47:06 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:02.332 06:47:06 -- dd/dd.sh@13 -- # check_liburing 00:08:02.332 06:47:06 -- dd/common.sh@139 -- # local lib so 00:08:02.332 06:47:06 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:02.332 06:47:06 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:08:02.332 
06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.332 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:08:02.332 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:08:02.333 06:47:06 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* 
]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.333 06:47:06 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:02.333 06:47:06 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:02.333 * spdk_dd linked to liburing 00:08:02.333 06:47:06 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:02.333 06:47:06 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:02.333 06:47:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:02.333 06:47:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:02.333 06:47:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:02.333 06:47:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:02.333 06:47:06 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:02.333 06:47:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:02.333 06:47:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:02.333 06:47:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:02.333 06:47:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:02.333 06:47:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:02.333 06:47:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:02.333 06:47:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:02.333 06:47:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:02.333 06:47:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:02.333 06:47:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:02.333 06:47:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:02.333 06:47:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:02.333 06:47:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:02.333 06:47:06 
-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:02.333 06:47:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:02.333 06:47:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:02.333 06:47:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:02.333 06:47:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:02.333 06:47:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:02.333 06:47:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:02.333 06:47:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:02.333 06:47:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:02.333 06:47:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:02.333 06:47:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:02.333 06:47:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:02.333 06:47:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:02.333 06:47:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:02.333 06:47:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:02.333 06:47:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:02.333 06:47:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:02.333 06:47:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:02.333 06:47:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:02.333 06:47:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:02.333 06:47:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:02.333 06:47:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:02.333 06:47:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:02.333 06:47:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:02.333 06:47:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:02.333 06:47:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:02.333 06:47:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:02.333 06:47:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:02.333 06:47:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:02.333 06:47:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:02.333 06:47:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:02.333 06:47:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:02.333 06:47:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:02.333 06:47:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:02.333 06:47:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:08:02.333 06:47:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:02.333 06:47:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:02.333 06:47:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:02.333 06:47:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:02.333 06:47:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:02.333 06:47:06 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:02.333 06:47:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:02.333 06:47:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:02.333 06:47:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:02.333 06:47:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:02.333 06:47:06 -- common/build_config.sh@64 -- # 
CONFIG_SHARED=y 00:08:02.333 06:47:06 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:02.333 06:47:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:02.333 06:47:06 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:02.333 06:47:06 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:02.333 06:47:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:02.333 06:47:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:02.333 06:47:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:02.333 06:47:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:02.333 06:47:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:02.333 06:47:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:02.333 06:47:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:02.333 06:47:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:02.333 06:47:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:02.333 06:47:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:02.333 06:47:06 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:08:02.333 06:47:06 -- dd/common.sh@149 -- # [[ y != y ]] 00:08:02.333 06:47:06 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:08:02.334 06:47:06 -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:02.334 06:47:06 -- dd/common.sh@156 -- # liburing_in_use=1 00:08:02.334 06:47:06 -- dd/common.sh@157 -- # return 0 00:08:02.334 06:47:06 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:02.334 06:47:06 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:02.334 06:47:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:02.334 06:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.334 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 ************************************ 00:08:02.334 START TEST spdk_dd_basic_rw 00:08:02.334 ************************************ 00:08:02.334 06:47:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:02.334 * Looking for test storage... 
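Once the match on liburing.so.2 lands, the script prints the banner, sources build_config.sh (the CONFIG_* dump above), and decides whether uring-backed runs are possible. The gate at dd/common.sh@149-157 and dd/dd.sh@15 plausibly reduces to the following; the exact variable wiring inside common.sh is inferred from the trace, so treat it as a sketch:

# Inferred gating logic; the wiring is an assumption, only the tests and the
# dd.sh@15 guard are verbatim from the trace.
source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh  # defines CONFIG_URING=y etc.
[[ $CONFIG_URING != y ]] && echo 'built without uring support' >&2         # trace: "[[ y != y ]]"
[[ ! -e /usr/lib64/liburing.so.2 ]] && echo 'system liburing missing' >&2  # common.sh@152
export liburing_in_use=1               # common.sh@156: spdk_dd really links liburing
# dd.sh@15 only aborts when uring tests were requested but nothing links it:
if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
    exit 1
fi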
00:08:02.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.334 06:47:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.334 06:47:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.334 06:47:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.595 06:47:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.595 06:47:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.595 06:47:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.595 06:47:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.595 06:47:06 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.595 06:47:06 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.595 06:47:06 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.595 06:47:06 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.595 06:47:06 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.595 06:47:06 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.595 06:47:06 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.595 06:47:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.595 06:47:06 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.595 06:47:06 -- scripts/common.sh@344 -- # : 1 00:08:02.595 06:47:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.595 06:47:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.595 06:47:06 -- scripts/common.sh@364 -- # decimal 1 00:08:02.595 06:47:06 -- scripts/common.sh@352 -- # local d=1 00:08:02.595 06:47:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.595 06:47:06 -- scripts/common.sh@354 -- # echo 1 00:08:02.595 06:47:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.595 06:47:06 -- scripts/common.sh@365 -- # decimal 2 00:08:02.595 06:47:06 -- scripts/common.sh@352 -- # local d=2 00:08:02.595 06:47:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.595 06:47:06 -- scripts/common.sh@354 -- # echo 2 00:08:02.595 06:47:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.595 06:47:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.595 06:47:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.595 06:47:06 -- scripts/common.sh@367 -- # return 0 00:08:02.595 06:47:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.595 06:47:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.595 --rc genhtml_branch_coverage=1 00:08:02.595 --rc genhtml_function_coverage=1 00:08:02.595 --rc genhtml_legend=1 00:08:02.595 --rc geninfo_all_blocks=1 00:08:02.595 --rc geninfo_unexecuted_blocks=1 00:08:02.595 00:08:02.595 ' 00:08:02.595 06:47:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.595 --rc genhtml_branch_coverage=1 00:08:02.595 --rc genhtml_function_coverage=1 00:08:02.595 --rc genhtml_legend=1 00:08:02.595 --rc geninfo_all_blocks=1 00:08:02.595 --rc geninfo_unexecuted_blocks=1 00:08:02.595 00:08:02.595 ' 00:08:02.595 06:47:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.595 --rc genhtml_branch_coverage=1 00:08:02.595 --rc genhtml_function_coverage=1 00:08:02.595 --rc genhtml_legend=1 00:08:02.595 --rc geninfo_all_blocks=1 00:08:02.595 --rc geninfo_unexecuted_blocks=1 00:08:02.595 00:08:02.595 ' 00:08:02.595 06:47:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.595 --rc genhtml_branch_coverage=1 00:08:02.595 --rc genhtml_function_coverage=1 00:08:02.595 --rc genhtml_legend=1 00:08:02.595 --rc geninfo_all_blocks=1 00:08:02.595 --rc geninfo_unexecuted_blocks=1 00:08:02.595 00:08:02.595 ' 00:08:02.595 06:47:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.595 06:47:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.595 06:47:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.595 06:47:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.595 06:47:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.595 06:47:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.595 06:47:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.595 06:47:06 -- paths/export.sh@5 -- # export PATH 00:08:02.595 06:47:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.595 06:47:06 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:02.595 06:47:06 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:02.595 06:47:06 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:02.595 06:47:06 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:08:02.595 06:47:06 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:02.595 06:47:06 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:08:02.595 06:47:06 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:02.595 06:47:06 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.595 06:47:06 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.595 06:47:06 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:08:02.596 06:47:06 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:08:02.596 06:47:06 -- dd/common.sh@126 -- # mapfile -t id 00:08:02.596 06:47:06 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:08:02.596 06:47:07 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2190 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:02.596 06:47:07 -- dd/common.sh@130 -- # lbaf=04 00:08:02.597 06:47:07 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2190 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:02.597 06:47:07 -- dd/common.sh@132 -- # lbaf=4096 00:08:02.597 06:47:07 -- dd/common.sh@134 -- # echo 4096 00:08:02.856 06:47:07 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:02.856 06:47:07 -- dd/basic_rw.sh@96 -- # : 00:08:02.857 06:47:07 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.857 06:47:07 -- dd/basic_rw.sh@96 -- # gen_conf 00:08:02.857 06:47:07 -- common/autotest_common.sh@1087 -- # '[' 8 
-le 1 ']' 00:08:02.857 06:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.857 06:47:07 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.857 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.857 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.857 ************************************ 00:08:02.857 START TEST dd_bs_lt_native_bs 00:08:02.857 ************************************ 00:08:02.857 06:47:07 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.857 06:47:07 -- common/autotest_common.sh@650 -- # local es=0 00:08:02.857 06:47:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.857 06:47:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.857 06:47:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.857 06:47:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.857 06:47:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.857 06:47:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.857 06:47:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.857 06:47:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.857 06:47:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.857 06:47:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.857 { 00:08:02.857 "subsystems": [ 00:08:02.857 { 00:08:02.857 "subsystem": "bdev", 00:08:02.857 "config": [ 00:08:02.857 { 00:08:02.857 "params": { 00:08:02.857 "trtype": "pcie", 00:08:02.857 "traddr": "0000:00:06.0", 00:08:02.857 "name": "Nvme0" 00:08:02.857 }, 00:08:02.857 "method": "bdev_nvme_attach_controller" 00:08:02.857 }, 00:08:02.857 { 00:08:02.857 "method": "bdev_wait_for_examine" 00:08:02.857 } 00:08:02.857 ] 00:08:02.857 } 00:08:02.857 ] 00:08:02.857 } 00:08:02.857 [2024-12-13 06:47:07.176957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
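The two giant identify dumps a few entries up are get_native_nvme_bs at work (dd/common.sh@124-134): it captures spdk_nvme_identify's report, regex-matches it once to find the active LBA format index (#04 here) and once more to read that format's data size, which becomes native_bs=4096. Condensed into a standalone function, with the two regexes taken from the trace:

# Condensed from the traced common.sh@126-134.
get_native_nvme_bs() {
    local pci=$1 lbaf id re
    mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
                          -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'     # -> lbaf=04
    [[ ${id[*]} =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"     # -> 4096
    [[ ${id[*]} =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"
}
native_bs=$(get_native_nvme_bs 0000:00:06.0)           # 4096 on this QEMU namespace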
00:08:02.857 [2024-12-13 06:47:07.177821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69716 ] 00:08:02.857 [2024-12-13 06:47:07.316614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.857 [2024-12-13 06:47:07.357076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.115 [2024-12-13 06:47:07.473714] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:03.115 [2024-12-13 06:47:07.473784] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.115 [2024-12-13 06:47:07.543996] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:03.115 06:47:07 -- common/autotest_common.sh@653 -- # es=234 00:08:03.115 06:47:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.115 06:47:07 -- common/autotest_common.sh@662 -- # es=106 00:08:03.115 06:47:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:03.115 06:47:07 -- common/autotest_common.sh@670 -- # es=1 00:08:03.115 06:47:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.115 00:08:03.115 real 0m0.484s 00:08:03.115 user 0m0.325s 00:08:03.115 sys 0m0.114s 00:08:03.115 06:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.115 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.115 ************************************ 00:08:03.115 END TEST dd_bs_lt_native_bs 00:08:03.115 ************************************ 00:08:03.374 06:47:07 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:03.374 06:47:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.374 06:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.374 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.374 ************************************ 00:08:03.374 START TEST dd_rw 00:08:03.374 ************************************ 00:08:03.374 06:47:07 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:08:03.374 06:47:07 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:03.374 06:47:07 -- dd/basic_rw.sh@12 -- # local count size 00:08:03.374 06:47:07 -- dd/basic_rw.sh@13 -- # local qds bss 00:08:03.374 06:47:07 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:03.374 06:47:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.374 06:47:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:03.374 06:47:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.374 06:47:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:03.374 06:47:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.374 06:47:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:03.374 06:47:07 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:03.374 06:47:07 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.374 06:47:07 -- dd/basic_rw.sh@23 -- # count=15 00:08:03.374 06:47:07 -- dd/basic_rw.sh@24 -- # count=15 00:08:03.374 06:47:07 -- dd/basic_rw.sh@25 -- # size=61440 00:08:03.374 06:47:07 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:03.374 06:47:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.374 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.942 06:47:08 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
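The dd_bs_lt_native_bs case that just ended is a negative test: NOT inverts the exit status, so the PASS comes precisely from spdk_dd refusing --bs=2048 against the 4096-byte namespace ("--bs value cannot be less than ... native block size"). A stripped-down equivalent; this NOT skips the error-class bookkeeping (es=234 -> 106 -> 1) the real helper in autotest_common.sh performs, and the input file is invented for illustration:

NOT() { ! "$@"; }   # simplified: succeed iff the wrapped command fails
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
head -c 2048 /dev/zero > /tmp/one_block   # hypothetical stand-in for the /dev/fd/62 input
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/one_block --ob=Nvme0n1 --bs=2048 --json <(printf '%s' "$conf") \
    && echo 'PASS: bs=2048 rejected (< native 4096)'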
00:08:03.942 06:47:08 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:03.942 06:47:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.942 06:47:08 -- common/autotest_common.sh@10 -- # set +x 00:08:03.942 [2024-12-13 06:47:08.298323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.942 [2024-12-13 06:47:08.298455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69751 ] 00:08:03.942 { 00:08:03.942 "subsystems": [ 00:08:03.942 { 00:08:03.942 "subsystem": "bdev", 00:08:03.942 "config": [ 00:08:03.942 { 00:08:03.942 "params": { 00:08:03.942 "trtype": "pcie", 00:08:03.942 "traddr": "0000:00:06.0", 00:08:03.942 "name": "Nvme0" 00:08:03.942 }, 00:08:03.942 "method": "bdev_nvme_attach_controller" 00:08:03.942 }, 00:08:03.942 { 00:08:03.942 "method": "bdev_wait_for_examine" 00:08:03.942 } 00:08:03.942 ] 00:08:03.942 } 00:08:03.942 ] 00:08:03.942 } 00:08:03.942 [2024-12-13 06:47:08.438414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.202 [2024-12-13 06:47:08.478468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.202  [2024-12-13T06:47:08.980Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:04.461 00:08:04.461 06:47:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:04.461 06:47:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:04.461 06:47:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.461 06:47:08 -- common/autotest_common.sh@10 -- # set +x 00:08:04.461 [2024-12-13 06:47:08.792904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
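With the native size known, dd_rw builds its sweep from it (basic_rw.sh@11-27, traced just before this write): three block sizes obtained by left-shifting native_bs, queue depths 1 and 64, and a 15-block transfer. The arithmetic, reconstructed from the trace:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))   # 4096, 8192, 16384
done
count=15
size=$((count * native_bs))       # 15 * 4096 = 61440, the "size=61440" above
echo "bss=${bss[*]} qds=${qds[*]} size=$size"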
00:08:04.461 [2024-12-13 06:47:08.793005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69765 ] 00:08:04.461 { 00:08:04.461 "subsystems": [ 00:08:04.461 { 00:08:04.461 "subsystem": "bdev", 00:08:04.461 "config": [ 00:08:04.461 { 00:08:04.461 "params": { 00:08:04.461 "trtype": "pcie", 00:08:04.461 "traddr": "0000:00:06.0", 00:08:04.461 "name": "Nvme0" 00:08:04.461 }, 00:08:04.461 "method": "bdev_nvme_attach_controller" 00:08:04.461 }, 00:08:04.461 { 00:08:04.461 "method": "bdev_wait_for_examine" 00:08:04.461 } 00:08:04.461 ] 00:08:04.461 } 00:08:04.461 ] 00:08:04.461 } 00:08:04.461 [2024-12-13 06:47:08.923636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.461 [2024-12-13 06:47:08.954222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.721  [2024-12-13T06:47:09.240Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:04.721 00:08:04.721 06:47:09 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.721 06:47:09 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:04.721 06:47:09 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:04.721 06:47:09 -- dd/common.sh@11 -- # local nvme_ref= 00:08:04.721 06:47:09 -- dd/common.sh@12 -- # local size=61440 00:08:04.721 06:47:09 -- dd/common.sh@14 -- # local bs=1048576 00:08:04.721 06:47:09 -- dd/common.sh@15 -- # local count=1 00:08:04.721 06:47:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:04.721 06:47:09 -- dd/common.sh@18 -- # gen_conf 00:08:04.721 06:47:09 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.721 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:08:04.980 [2024-12-13 06:47:09.265775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
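Each qd/bs cell is the three-step round trip just traced: push dd.dump0 through the bdev, pull the same region back into dd.dump1, and compare byte-for-byte. The commands are lifted from the trace; the $conf one-liner stands in for gen_conf's output, fed over a /dev/fd path as in the log:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
"$spdk_dd" --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
"$spdk_dd" --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")
diff -q "$dump0" "$dump1"   # exit 0 => all 61440 bytes survived the trip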
00:08:04.980 [2024-12-13 06:47:09.265869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69773 ] 00:08:04.980 { 00:08:04.980 "subsystems": [ 00:08:04.980 { 00:08:04.980 "subsystem": "bdev", 00:08:04.980 "config": [ 00:08:04.980 { 00:08:04.980 "params": { 00:08:04.980 "trtype": "pcie", 00:08:04.980 "traddr": "0000:00:06.0", 00:08:04.980 "name": "Nvme0" 00:08:04.980 }, 00:08:04.980 "method": "bdev_nvme_attach_controller" 00:08:04.980 }, 00:08:04.980 { 00:08:04.980 "method": "bdev_wait_for_examine" 00:08:04.980 } 00:08:04.980 ] 00:08:04.980 } 00:08:04.980 ] 00:08:04.980 } 00:08:04.980 [2024-12-13 06:47:09.405319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.980 [2024-12-13 06:47:09.436891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.244  [2024-12-13T06:47:09.763Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:05.244 00:08:05.244 06:47:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:05.244 06:47:09 -- dd/basic_rw.sh@23 -- # count=15 00:08:05.244 06:47:09 -- dd/basic_rw.sh@24 -- # count=15 00:08:05.244 06:47:09 -- dd/basic_rw.sh@25 -- # size=61440 00:08:05.244 06:47:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:05.244 06:47:09 -- dd/common.sh@98 -- # xtrace_disable 00:08:05.244 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.863 06:47:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:05.863 06:47:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:05.863 06:47:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.863 06:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:05.863 { 00:08:05.863 "subsystems": [ 00:08:05.863 { 00:08:05.863 "subsystem": "bdev", 00:08:05.863 "config": [ 00:08:05.863 { 00:08:05.863 "params": { 00:08:05.863 "trtype": "pcie", 00:08:05.863 "traddr": "0000:00:06.0", 00:08:05.863 "name": "Nvme0" 00:08:05.863 }, 00:08:05.863 "method": "bdev_nvme_attach_controller" 00:08:05.863 }, 00:08:05.863 { 00:08:05.863 "method": "bdev_wait_for_examine" 00:08:05.863 } 00:08:05.863 ] 00:08:05.863 } 00:08:05.863 ] 00:08:05.863 } 00:08:05.863 [2024-12-13 06:47:10.259502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
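The 1024/1024 [kB] copy between cells is clear_nvme (dd/common.sh@10-18): it wipes the touched region with zeroes so the next combination starts from known contents. A sketch using the locals the trace shows; deriving count from size is an assumption, the traced run simply used count=1 since 61440 bytes fit in one 1 MiB block:

clear_nvme() {
    local bdev=$1 nvme_ref=$2 size=${3:-61440}
    local bs=1048576
    local count=$(( (size + bs - 1) / bs ))   # 61440 -> 1, as in the trace
    "$spdk_dd" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" \
               --json <(printf '%s' "$conf")  # $spdk_dd/$conf as in the round-trip sketch above
}
clear_nvme Nvme0n1 '' 61440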
00:08:05.863 [2024-12-13 06:47:10.259608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69791 ] 00:08:06.122 [2024-12-13 06:47:10.396232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.122 [2024-12-13 06:47:10.428043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.122  [2024-12-13T06:47:10.900Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:06.381 00:08:06.381 06:47:10 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:06.381 06:47:10 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.381 06:47:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.381 06:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.381 [2024-12-13 06:47:10.742566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:06.381 [2024-12-13 06:47:10.742661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:08:06.381 { 00:08:06.381 "subsystems": [ 00:08:06.381 { 00:08:06.381 "subsystem": "bdev", 00:08:06.381 "config": [ 00:08:06.381 { 00:08:06.381 "params": { 00:08:06.381 "trtype": "pcie", 00:08:06.381 "traddr": "0000:00:06.0", 00:08:06.381 "name": "Nvme0" 00:08:06.381 }, 00:08:06.381 "method": "bdev_nvme_attach_controller" 00:08:06.381 }, 00:08:06.381 { 00:08:06.381 "method": "bdev_wait_for_examine" 00:08:06.381 } 00:08:06.381 ] 00:08:06.381 } 00:08:06.381 ] 00:08:06.381 } 00:08:06.381 [2024-12-13 06:47:10.884505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.641 [2024-12-13 06:47:10.915662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.641  [2024-12-13T06:47:11.419Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:06.900 00:08:06.900 06:47:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.900 06:47:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:06.900 06:47:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.900 06:47:11 -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.900 06:47:11 -- dd/common.sh@12 -- # local size=61440 00:08:06.900 06:47:11 -- dd/common.sh@14 -- # local bs=1048576 00:08:06.900 06:47:11 -- dd/common.sh@15 -- # local count=1 00:08:06.900 06:47:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:06.900 06:47:11 -- dd/common.sh@18 -- # gen_conf 00:08:06.900 06:47:11 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.900 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.900 { 00:08:06.900 "subsystems": [ 00:08:06.900 { 00:08:06.900 "subsystem": "bdev", 00:08:06.900 "config": [ 00:08:06.900 { 00:08:06.900 "params": { 00:08:06.900 "trtype": "pcie", 00:08:06.900 "traddr": "0000:00:06.0", 00:08:06.900 "name": "Nvme0" 00:08:06.900 }, 00:08:06.900 "method": "bdev_nvme_attach_controller" 00:08:06.900 }, 00:08:06.900 { 00:08:06.900 "method": "bdev_wait_for_examine" 00:08:06.900 } 00:08:06.900 ] 00:08:06.900 } 00:08:06.900 ] 00:08:06.900 } 00:08:06.900 [2024-12-13 
06:47:11.230565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:06.900 [2024-12-13 06:47:11.230659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69817 ] 00:08:06.900 [2024-12-13 06:47:11.368412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.900 [2024-12-13 06:47:11.399157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.159  [2024-12-13T06:47:11.678Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:07.159 00:08:07.159 06:47:11 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:07.159 06:47:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:07.159 06:47:11 -- dd/basic_rw.sh@23 -- # count=7 00:08:07.159 06:47:11 -- dd/basic_rw.sh@24 -- # count=7 00:08:07.159 06:47:11 -- dd/basic_rw.sh@25 -- # size=57344 00:08:07.159 06:47:11 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:07.159 06:47:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.159 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:08:07.727 06:47:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:07.727 06:47:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:07.727 06:47:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.727 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.727 [2024-12-13 06:47:12.206700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.727 [2024-12-13 06:47:12.206846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69835 ] 00:08:07.727 { 00:08:07.727 "subsystems": [ 00:08:07.727 { 00:08:07.727 "subsystem": "bdev", 00:08:07.727 "config": [ 00:08:07.727 { 00:08:07.727 "params": { 00:08:07.727 "trtype": "pcie", 00:08:07.727 "traddr": "0000:00:06.0", 00:08:07.727 "name": "Nvme0" 00:08:07.727 }, 00:08:07.727 "method": "bdev_nvme_attach_controller" 00:08:07.727 }, 00:08:07.727 { 00:08:07.727 "method": "bdev_wait_for_examine" 00:08:07.727 } 00:08:07.727 ] 00:08:07.727 } 00:08:07.727 ] 00:08:07.727 } 00:08:07.987 [2024-12-13 06:47:12.345762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.987 [2024-12-13 06:47:12.379032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.987  [2024-12-13T06:47:12.765Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:08.246 00:08:08.246 06:47:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:08.246 06:47:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:08.246 06:47:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.246 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:08:08.246 [2024-12-13 06:47:12.697257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
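Note how the transfer count shrinks as the block size grows: the pass that just started uses bs=8192 with count=7, keeping the transfer a whole number of blocks. The arithmetic from the trace:

bs=8192; count=7                 # 61440 / 8192 = 7.5, apparently rounded down to whole blocks
echo $((count * bs))             # 57344, matching "size=57344" above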
00:08:08.246 [2024-12-13 06:47:12.697405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69847 ] 00:08:08.246 { 00:08:08.246 "subsystems": [ 00:08:08.246 { 00:08:08.246 "subsystem": "bdev", 00:08:08.246 "config": [ 00:08:08.246 { 00:08:08.246 "params": { 00:08:08.246 "trtype": "pcie", 00:08:08.246 "traddr": "0000:00:06.0", 00:08:08.246 "name": "Nvme0" 00:08:08.246 }, 00:08:08.246 "method": "bdev_nvme_attach_controller" 00:08:08.246 }, 00:08:08.246 { 00:08:08.246 "method": "bdev_wait_for_examine" 00:08:08.246 } 00:08:08.246 ] 00:08:08.246 } 00:08:08.246 ] 00:08:08.246 } 00:08:08.505 [2024-12-13 06:47:12.837084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.505 [2024-12-13 06:47:12.871782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.505  [2024-12-13T06:47:13.284Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:08.765 00:08:08.765 06:47:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.765 06:47:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:08.765 06:47:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:08.765 06:47:13 -- dd/common.sh@11 -- # local nvme_ref= 00:08:08.765 06:47:13 -- dd/common.sh@12 -- # local size=57344 00:08:08.765 06:47:13 -- dd/common.sh@14 -- # local bs=1048576 00:08:08.765 06:47:13 -- dd/common.sh@15 -- # local count=1 00:08:08.765 06:47:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:08.765 06:47:13 -- dd/common.sh@18 -- # gen_conf 00:08:08.765 06:47:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.765 06:47:13 -- common/autotest_common.sh@10 -- # set +x 00:08:08.765 [2024-12-13 06:47:13.180965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:08.765 [2024-12-13 06:47:13.181085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69861 ] 00:08:08.765 { 00:08:08.765 "subsystems": [ 00:08:08.765 { 00:08:08.765 "subsystem": "bdev", 00:08:08.765 "config": [ 00:08:08.765 { 00:08:08.765 "params": { 00:08:08.765 "trtype": "pcie", 00:08:08.765 "traddr": "0000:00:06.0", 00:08:08.765 "name": "Nvme0" 00:08:08.765 }, 00:08:08.765 "method": "bdev_nvme_attach_controller" 00:08:08.765 }, 00:08:08.765 { 00:08:08.765 "method": "bdev_wait_for_examine" 00:08:08.765 } 00:08:08.765 ] 00:08:08.765 } 00:08:08.765 ] 00:08:08.765 } 00:08:09.023 [2024-12-13 06:47:13.305979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.023 [2024-12-13 06:47:13.337574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.023  [2024-12-13T06:47:13.801Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.282 00:08:09.282 06:47:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:09.282 06:47:13 -- dd/basic_rw.sh@23 -- # count=7 00:08:09.282 06:47:13 -- dd/basic_rw.sh@24 -- # count=7 00:08:09.282 06:47:13 -- dd/basic_rw.sh@25 -- # size=57344 00:08:09.282 06:47:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:09.282 06:47:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:09.282 06:47:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.851 06:47:14 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:09.851 06:47:14 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:09.851 06:47:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.851 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:08:09.851 [2024-12-13 06:47:14.185875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
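Every (bs, qd) pass traced above is the same three-step cycle: write dd.dump0 to the bdev, read it back into dd.dump1, then require byte equality. A hedged reconstruction of the pass just logged (paths shortened, gen_conf as sketched earlier):

bs=8192 qd=1 count=7  # 7 blocks * 8192 B = 57344 B, the size logged for this pass
"$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
"$spdk_dd" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
diff -q dd.dump0 dd.dump1  # silent when the read-back matches; any output fails the test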
00:08:09.851 [2024-12-13 06:47:14.186018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:08:09.851 { 00:08:09.851 "subsystems": [ 00:08:09.851 { 00:08:09.851 "subsystem": "bdev", 00:08:09.851 "config": [ 00:08:09.851 { 00:08:09.851 "params": { 00:08:09.851 "trtype": "pcie", 00:08:09.851 "traddr": "0000:00:06.0", 00:08:09.851 "name": "Nvme0" 00:08:09.851 }, 00:08:09.851 "method": "bdev_nvme_attach_controller" 00:08:09.851 }, 00:08:09.851 { 00:08:09.851 "method": "bdev_wait_for_examine" 00:08:09.851 } 00:08:09.851 ] 00:08:09.851 } 00:08:09.851 ] 00:08:09.851 } 00:08:09.851 [2024-12-13 06:47:14.325911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.851 [2024-12-13 06:47:14.359584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.110  [2024-12-13T06:47:14.889Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:10.370 00:08:10.370 06:47:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:10.370 06:47:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:10.370 06:47:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.370 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:08:10.370 [2024-12-13 06:47:14.676688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.370 [2024-12-13 06:47:14.676802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69890 ] 00:08:10.370 { 00:08:10.370 "subsystems": [ 00:08:10.370 { 00:08:10.370 "subsystem": "bdev", 00:08:10.370 "config": [ 00:08:10.370 { 00:08:10.370 "params": { 00:08:10.370 "trtype": "pcie", 00:08:10.370 "traddr": "0000:00:06.0", 00:08:10.370 "name": "Nvme0" 00:08:10.370 }, 00:08:10.370 "method": "bdev_nvme_attach_controller" 00:08:10.370 }, 00:08:10.370 { 00:08:10.370 "method": "bdev_wait_for_examine" 00:08:10.370 } 00:08:10.370 ] 00:08:10.370 } 00:08:10.370 ] 00:08:10.370 } 00:08:10.370 [2024-12-13 06:47:14.807253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.370 [2024-12-13 06:47:14.842185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.630  [2024-12-13T06:47:15.149Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:10.630 00:08:10.630 06:47:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.630 06:47:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:10.630 06:47:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:10.630 06:47:15 -- dd/common.sh@11 -- # local nvme_ref= 00:08:10.630 06:47:15 -- dd/common.sh@12 -- # local size=57344 00:08:10.630 06:47:15 -- dd/common.sh@14 -- # local bs=1048576 00:08:10.630 06:47:15 -- dd/common.sh@15 -- # local count=1 00:08:10.630 06:47:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:10.630 06:47:15 -- dd/common.sh@18 -- # gen_conf 00:08:10.630 06:47:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.630 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.888 [2024-12-13 
06:47:15.181961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.888 [2024-12-13 06:47:15.182086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69905 ] 00:08:10.888 { 00:08:10.888 "subsystems": [ 00:08:10.888 { 00:08:10.888 "subsystem": "bdev", 00:08:10.888 "config": [ 00:08:10.888 { 00:08:10.888 "params": { 00:08:10.889 "trtype": "pcie", 00:08:10.889 "traddr": "0000:00:06.0", 00:08:10.889 "name": "Nvme0" 00:08:10.889 }, 00:08:10.889 "method": "bdev_nvme_attach_controller" 00:08:10.889 }, 00:08:10.889 { 00:08:10.889 "method": "bdev_wait_for_examine" 00:08:10.889 } 00:08:10.889 ] 00:08:10.889 } 00:08:10.889 ] 00:08:10.889 } 00:08:10.889 [2024-12-13 06:47:15.321218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.889 [2024-12-13 06:47:15.352287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.147  [2024-12-13T06:47:15.666Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.147 00:08:11.147 06:47:15 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:11.147 06:47:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:11.147 06:47:15 -- dd/basic_rw.sh@23 -- # count=3 00:08:11.147 06:47:15 -- dd/basic_rw.sh@24 -- # count=3 00:08:11.147 06:47:15 -- dd/basic_rw.sh@25 -- # size=49152 00:08:11.147 06:47:15 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:11.147 06:47:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.147 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:08:11.715 06:47:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:11.715 06:47:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:11.715 06:47:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:11.715 06:47:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.715 [2024-12-13 06:47:16.137088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
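Between cycles the bdev is scrubbed so stale data cannot satisfy the next diff. The clear_nvme locals traced above (bdev, nvme_ref, size, bs=1048576, count=1) suggest roughly this helper, sketched here rather than copied from dd/common.sh:

clear_nvme() {  # invoked above as: clear_nvme Nvme0n1 '' 57344
  local bdev=$1 nvme_ref=$2 size=$3
  local bs=1048576 count=1  # a single 1 MiB block of zeros covers every size used here
  "$spdk_dd" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}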
00:08:11.715 [2024-12-13 06:47:16.137212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69923 ] 00:08:11.715 { 00:08:11.715 "subsystems": [ 00:08:11.715 { 00:08:11.715 "subsystem": "bdev", 00:08:11.715 "config": [ 00:08:11.715 { 00:08:11.715 "params": { 00:08:11.715 "trtype": "pcie", 00:08:11.715 "traddr": "0000:00:06.0", 00:08:11.715 "name": "Nvme0" 00:08:11.715 }, 00:08:11.715 "method": "bdev_nvme_attach_controller" 00:08:11.715 }, 00:08:11.715 { 00:08:11.715 "method": "bdev_wait_for_examine" 00:08:11.715 } 00:08:11.715 ] 00:08:11.715 } 00:08:11.715 ] 00:08:11.715 } 00:08:11.974 [2024-12-13 06:47:16.277368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.974 [2024-12-13 06:47:16.311029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.974  [2024-12-13T06:47:16.753Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:12.234 00:08:12.234 06:47:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:12.234 06:47:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.234 06:47:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.234 06:47:16 -- common/autotest_common.sh@10 -- # set +x 00:08:12.234 [2024-12-13 06:47:16.641257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.234 [2024-12-13 06:47:16.641417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69930 ] 00:08:12.234 { 00:08:12.234 "subsystems": [ 00:08:12.234 { 00:08:12.234 "subsystem": "bdev", 00:08:12.234 "config": [ 00:08:12.234 { 00:08:12.234 "params": { 00:08:12.234 "trtype": "pcie", 00:08:12.234 "traddr": "0000:00:06.0", 00:08:12.234 "name": "Nvme0" 00:08:12.234 }, 00:08:12.234 "method": "bdev_nvme_attach_controller" 00:08:12.234 }, 00:08:12.234 { 00:08:12.234 "method": "bdev_wait_for_examine" 00:08:12.234 } 00:08:12.234 ] 00:08:12.234 } 00:08:12.234 ] 00:08:12.234 } 00:08:12.493 [2024-12-13 06:47:16.777718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.493 [2024-12-13 06:47:16.815348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.493  [2024-12-13T06:47:17.272Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:12.753 00:08:12.753 06:47:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.753 06:47:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:12.753 06:47:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:12.753 06:47:17 -- dd/common.sh@11 -- # local nvme_ref= 00:08:12.753 06:47:17 -- dd/common.sh@12 -- # local size=49152 00:08:12.753 06:47:17 -- dd/common.sh@14 -- # local bs=1048576 00:08:12.753 06:47:17 -- dd/common.sh@15 -- # local count=1 00:08:12.753 06:47:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:12.753 06:47:17 -- dd/common.sh@18 -- # gen_conf 00:08:12.753 06:47:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.753 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.753 [2024-12-13 
06:47:17.143615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.753 [2024-12-13 06:47:17.143735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69949 ] 00:08:12.753 { 00:08:12.753 "subsystems": [ 00:08:12.753 { 00:08:12.753 "subsystem": "bdev", 00:08:12.753 "config": [ 00:08:12.753 { 00:08:12.753 "params": { 00:08:12.753 "trtype": "pcie", 00:08:12.753 "traddr": "0000:00:06.0", 00:08:12.753 "name": "Nvme0" 00:08:12.753 }, 00:08:12.753 "method": "bdev_nvme_attach_controller" 00:08:12.753 }, 00:08:12.753 { 00:08:12.753 "method": "bdev_wait_for_examine" 00:08:12.753 } 00:08:12.753 ] 00:08:12.753 } 00:08:12.753 ] 00:08:12.753 } 00:08:13.012 [2024-12-13 06:47:17.284820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.012 [2024-12-13 06:47:17.317679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.012  [2024-12-13T06:47:17.790Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.271 00:08:13.271 06:47:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:13.271 06:47:17 -- dd/basic_rw.sh@23 -- # count=3 00:08:13.271 06:47:17 -- dd/basic_rw.sh@24 -- # count=3 00:08:13.271 06:47:17 -- dd/basic_rw.sh@25 -- # size=49152 00:08:13.271 06:47:17 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:13.271 06:47:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:13.271 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.840 06:47:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:13.840 06:47:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:13.840 06:47:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:13.840 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.840 [2024-12-13 06:47:18.125672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
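The per-block-size counts logged so far (15 at bs=4096, 7 at 8192, 3 at 16384) all equal 64 KiB / bs - 1, so the sweep plausibly looks like the loop below; the array contents and the count formula are inferred from the trace, not read out of basic_rw.sh:

bss=(4096 8192 16384)
qds=(1 64)
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$(( 65536 / bs - 1 ))  # reproduces the logged 15, 7 and 3
    size=$(( count * bs ))       # 61440, 57344 and 49152 bytes respectively
    # write, read back, diff, then clear_nvme, as sketched above
  done
done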
00:08:13.840 [2024-12-13 06:47:18.125781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:08:13.840 { 00:08:13.840 "subsystems": [ 00:08:13.840 { 00:08:13.840 "subsystem": "bdev", 00:08:13.840 "config": [ 00:08:13.840 { 00:08:13.840 "params": { 00:08:13.840 "trtype": "pcie", 00:08:13.840 "traddr": "0000:00:06.0", 00:08:13.840 "name": "Nvme0" 00:08:13.840 }, 00:08:13.840 "method": "bdev_nvme_attach_controller" 00:08:13.840 }, 00:08:13.840 { 00:08:13.840 "method": "bdev_wait_for_examine" 00:08:13.840 } 00:08:13.840 ] 00:08:13.840 } 00:08:13.840 ] 00:08:13.840 } 00:08:13.840 [2024-12-13 06:47:18.264939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.840 [2024-12-13 06:47:18.297850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.099  [2024-12-13T06:47:18.618Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:14.099 00:08:14.099 06:47:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:14.099 06:47:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:14.099 06:47:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.099 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:08:14.358 [2024-12-13 06:47:18.628298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:14.359 [2024-12-13 06:47:18.628495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69974 ] 00:08:14.359 { 00:08:14.359 "subsystems": [ 00:08:14.359 { 00:08:14.359 "subsystem": "bdev", 00:08:14.359 "config": [ 00:08:14.359 { 00:08:14.359 "params": { 00:08:14.359 "trtype": "pcie", 00:08:14.359 "traddr": "0000:00:06.0", 00:08:14.359 "name": "Nvme0" 00:08:14.359 }, 00:08:14.359 "method": "bdev_nvme_attach_controller" 00:08:14.359 }, 00:08:14.359 { 00:08:14.359 "method": "bdev_wait_for_examine" 00:08:14.359 } 00:08:14.359 ] 00:08:14.359 } 00:08:14.359 ] 00:08:14.359 } 00:08:14.359 [2024-12-13 06:47:18.768958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.359 [2024-12-13 06:47:18.802338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.618  [2024-12-13T06:47:19.137Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:14.618 00:08:14.618 06:47:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.618 06:47:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:14.618 06:47:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.618 06:47:19 -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.618 06:47:19 -- dd/common.sh@12 -- # local size=49152 00:08:14.618 06:47:19 -- dd/common.sh@14 -- # local bs=1048576 00:08:14.618 06:47:19 -- dd/common.sh@15 -- # local count=1 00:08:14.618 06:47:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:14.618 06:47:19 -- dd/common.sh@18 -- # gen_conf 00:08:14.618 06:47:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.618 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.618 [2024-12-13 
06:47:19.117153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:14.618 [2024-12-13 06:47:19.117259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69993 ] 00:08:14.618 { 00:08:14.618 "subsystems": [ 00:08:14.618 { 00:08:14.618 "subsystem": "bdev", 00:08:14.618 "config": [ 00:08:14.618 { 00:08:14.618 "params": { 00:08:14.618 "trtype": "pcie", 00:08:14.618 "traddr": "0000:00:06.0", 00:08:14.618 "name": "Nvme0" 00:08:14.618 }, 00:08:14.618 "method": "bdev_nvme_attach_controller" 00:08:14.618 }, 00:08:14.618 { 00:08:14.618 "method": "bdev_wait_for_examine" 00:08:14.618 } 00:08:14.618 ] 00:08:14.618 } 00:08:14.618 ] 00:08:14.618 } 00:08:14.877 [2024-12-13 06:47:19.248228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.877 [2024-12-13 06:47:19.281002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.877  [2024-12-13T06:47:19.655Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.136 00:08:15.136 00:08:15.136 real 0m11.886s 00:08:15.136 user 0m8.673s 00:08:15.136 sys 0m2.153s 00:08:15.136 06:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.136 ************************************ 00:08:15.136 END TEST dd_rw 00:08:15.136 ************************************ 00:08:15.136 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.136 06:47:19 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:15.136 06:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.136 06:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.136 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.136 ************************************ 00:08:15.136 START TEST dd_rw_offset 00:08:15.136 ************************************ 00:08:15.136 06:47:19 -- common/autotest_common.sh@1114 -- # basic_offset 00:08:15.136 06:47:19 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:15.136 06:47:19 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:15.136 06:47:19 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.136 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.395 06:47:19 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:15.396 06:47:19 -- dd/basic_rw.sh@56 -- # 
data=6d7febb1qxufxwaj5g91rjj4yhdo2xogv8bn4plx7xjm35r7gk76cumbkvtfol6dndwo98njjv7ohq5tgw0rjxcmjkqkeotwml64tfkedgfh28e7xmqri7zy46i527yqk2j5n78nxylgnr8sbrz72mjjf0agq4f1odsch9epgjncfoq831xqjgv63amtnn14qcpacdba8mgkmcf2osslcw6ikyj0v1xxdvgqn4og2d5jyyabtgfvxzv8g219xsgsln3dn06a6rnotwkvzzu758lk817xas2f8oryk3evwojsvwnbya6bn64s6ns7xr0s0kkel9df5c3npf4yuk90mcvx0buswr7jqid96rs7ms0w2bfazrj6kvirs9vyl5xgky6mzfanbzv06jmvef8d1szcym3ayrvo90k1jr3cw68tf2uwu7k73fz36qb66r7raqnvceqjfn570y88vmnrwu675b9znxtk42jebq263pkmew2brbhgt25fzv28qeqahry17tzj10utz8k4n4bwmftredglsoq7q81ahgxv4fooa431vokl76971j1s1237ev3wv1p3j0nmm09qfnu06d8f5n55m62crioffguc1nwg4qe3h4edz3euwwt7ds7dabwhf61b7vi2l5f21eutxty6tvfj1umeuifk0jpscujen0zihmj8lrle2gk5jfxun3dxr2y7fsj6pqasjt8sh8jj2419i4fm8un02ki6mqkntux2t9aa5hewv682aac3mumxj6trkp26k21xhu932s4plhb2lr4twl2knwhzulo8jxnt4j1lomcj0mbb7a6ot88t2q4xacmxyzr94v4v7603ue6l9fs3he4zrh40g42g1kr0vepd85tv6znuqdl2nc0zlc0oq5swulfuqmm8q2gojg2kuxhoytey9m0znno2hpub2zaeus5wwm1148lcusdem8wrmqf0x7agvxs3widn388aa8yjq4q6lg3j42h0257s7tlp12x21d9si9d9gw1ag0oawq72s628bdn5bcxeziubu3nrl7gx8tynarwam4uk5yypv4ucihmj3cae40vqq65c039gezwf55v9ulmza847625xxy2b19bnecftoe5orvp4xq8o0mzeszt8rnckr06bsom83mykdiezb9gipqrc2jutbceaxlu5ijmsuvs1hngpzmpidm5gpm6bwlrswrf16yd02e8hrrcajnvgm5su190piisvzdamll4yjvpe3jv0na9qmpez4ey796fb8kdaviwpvwfh974kjg0nga70owastsz4kp6083t7denvj15us9u1t53jo86q0qnz3z7v9utcacmg80jc31uttohoiylgsyludfgtwh5vx63ockt1gd83a3l4mhb95rkzv67ltjp3n98qtxjx579n9lkm0hd5akveok3180e48oec7hybi0xzog6fs36hz6ne7p6mlhjost3dzmb7fk8wmo7hpr8gu6qxzeufou4kbkeq254bh3wuiyzl4e1dbz7ga03nw9notzusz49y1bt74w930wo1hkm1al3qg985away6ir5nmr9ktjccxl74rxyi22ywu13owhmufhv9os0sp5weeogh4yb3u54417zyb13ac8fwnhuj6l0k4xl5xtz33r9djkid1xz0sbcgk3f6mm87nn0d83dr251ibpgkqqrua3q6s8lnwildbm0dhvxr9bi3mpixqbm2xjbe3ihqafohslgf7p97gwc7e79vs540t0vv4xqzlk3bvvvz04vxf2hjd2qiac46wz04n93gq5vvoal3rbhupjvy500t19j41ys53qnxh9v6jawdbxakcq9exewoagdyd9676o3wrt975j2vybdlzgq622hgn1wlwcg6uxa4dgubk2re2rdggs4ukn8j08dslxb52366mekuudcdh8ogv28656147wfmqifwxzs8y8mck4gosnaeu30k7vmq1hej4zzy7vgn3q8ytl6b7quu6svv91om4llt4g4u76iheqwzdi8jve83zbd9cog78o9zi046mxae7jzo58oswgfjrs6txu4tqkhhhc9lx0w89nqqmq9ioxxrbyg6z027lmw5i6wtjq8qxpywmaiqolkv03y39de4itcicapaqdm78qv55k3o0tut5pdkiqkdkbu09u0m0m2w7as9sygolx8bjp9ueoyg6xvsgakxiyzl7oc5nxp9h1q53xtoskm3n1b2fwt1oa2c4e5o3yqhbs7ibpmcnts2wq6hbdtdnj0g0ddkq67jszdso6rga4p58g39t32uq2164f940inkvtw765pyjfeyztdg7obr448sgl34xowv9aulg96727sd99y1u6hx9kcviiz1dcqfkpg3102i3zoo7z0w9s0ogf8negaz19whm2iv1juy4tl8mlevxjjvvkg1habcdjejto0wj0ol910nwvyl6la6mczvrw0w1284uhd7oxgf8v9l2qizllvi5t4576vnndkgpsvqjgxfyioqqy8fi2cjieuwxmys5hgn30bs73qwxsiu5jzq7tv4y8hz0rzjc3chtjhnbi8jf68r22v13q21r4oi0ttewhikr992p1h9jnux7m17ipksspe54buib4wdthc2avqavs5brc4zz4xqdkjdu1dl5rejvzn6ylkgda0shc6chtibcgmt2rurx8wd9s8gm1g9hpt470bvfuokxbh05xhhnf3n8n64zfd8m1ew05o7nlunwaxvlel16xu5uz2hpvzifegmd4a01h3coketmjcfv1aqancv6eu82y2olde6k57u8zwlzpmvn645lhhae5lo3clf3m0sxnat0op29wxuax0pf5z5xh4fda87hu2kjexuokbmu5sqdx3urc7bzyxhsn9aaviy2ru7a54q4gxiyktgrkpct9p3vf8h7xbs6e3101qm5ota00w2kytcjwcqhbuld3r4k56oi479s3i7xr20y87yokuw9hh7fbd54056ho4rf6detik7wrya6k4ozgb9yr7h8i295gp5l6m7u3inyeawmb3n2wku5bhj735lcp5w81yxjob83zkcjuvdtgfvw26eoboug5oz4tdbyb5a914r5ww9gleac4d3j7wj1eq83lq7dtfi0voizhnzsrrcnzsnba631f8c54mwhvb9tfxgy9viw3cq2zn8s85yd08xbtxh9eiv6e9nkbzu4f3nnydwzyw06nxups2g09nqwc2ia9w8r92idfm7x6uxmtvkg3ql4d0iy4l6lku51tv3o7zbfkoyy6puzlot7h1v9fontmhhpjrd9qyhv3n9vt74706dk4pio9y8ux0txb44ycrd7lptkmshchnf4m9fc4rmj3su756ytwt3o2cog2fh3g70i72b12wvz1j3bu9p1lv372pvuom1493yx4k6n4pvhpzwworqn9ftfiv8t2gsu1on46w4tzzmhew6rza8gnin3zowjua6ueeakk5a7h19uonpa9ujv0wp0u0cvdtps4e5y
fg7kzyf7uskq7yk3cqouopl4s89gf13czye4vmn3p6pmwzu2ics1ahuzl558f3pnl6abq1clie0pcpwt3ojhp2cpxkoz7q9op2ogc96sokzp9y4mg31v264pwj28dwfbp7rvw869rl4zxukdlj2me4nd9i60t99jccp8zjcjp0krzi0997v7e1kkrrrk2ruthahp0cyleh9wmalkde4f4m4e2b9cy4sqi0phbzvq4gjecezgwwvelviz7rv4jvxwv3utvuszhg25c2wzp0n5kkf3xfn1z3gyb0q274ohwe7dl3141ej29xszc52z07z2ckon5fegkac7m5ui2p2lruzcqy18c08s52jdjz15t4k0cv7cnz8lmwldac5e3rq5fq8ueroftaxntfe9428gzkeeiyeo8m3bmi742cjtvh8r8xrv8ek347m4oo7gce7mrt18t1391i6ev018nmz8e56hyteay4gek9s7bjdji36y1zsc9h2qgpj09wbasxpol541la2mhzlmbbqqew5r88g0jdmbqkxhhn 00:08:15.396 06:47:19 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:15.396 06:47:19 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:15.396 06:47:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.396 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.396 [2024-12-13 06:47:19.712168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.396 [2024-12-13 06:47:19.712294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70017 ] 00:08:15.396 { 00:08:15.396 "subsystems": [ 00:08:15.396 { 00:08:15.396 "subsystem": "bdev", 00:08:15.396 "config": [ 00:08:15.396 { 00:08:15.396 "params": { 00:08:15.396 "trtype": "pcie", 00:08:15.396 "traddr": "0000:00:06.0", 00:08:15.396 "name": "Nvme0" 00:08:15.396 }, 00:08:15.396 "method": "bdev_nvme_attach_controller" 00:08:15.396 }, 00:08:15.396 { 00:08:15.396 "method": "bdev_wait_for_examine" 00:08:15.396 } 00:08:15.396 ] 00:08:15.396 } 00:08:15.396 ] 00:08:15.396 } 00:08:15.396 [2024-12-13 06:47:19.852843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.396 [2024-12-13 06:47:19.886180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.654  [2024-12-13T06:47:20.173Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:15.654 00:08:15.654 06:47:20 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:15.654 06:47:20 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:15.654 06:47:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.654 06:47:20 -- common/autotest_common.sh@10 -- # set +x 00:08:15.913 { 00:08:15.913 "subsystems": [ 00:08:15.913 { 00:08:15.914 "subsystem": "bdev", 00:08:15.914 "config": [ 00:08:15.914 { 00:08:15.914 "params": { 00:08:15.914 "trtype": "pcie", 00:08:15.914 "traddr": "0000:00:06.0", 00:08:15.914 "name": "Nvme0" 00:08:15.914 }, 00:08:15.914 "method": "bdev_nvme_attach_controller" 00:08:15.914 }, 00:08:15.914 { 00:08:15.914 "method": "bdev_wait_for_examine" 00:08:15.914 } 00:08:15.914 ] 00:08:15.914 } 00:08:15.914 ] 00:08:15.914 } 00:08:15.914 [2024-12-13 06:47:20.201200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
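dd_rw_offset checks that --seek on write and --skip on read address the same block: the 4096-byte random string dumped above is written one block into the bdev and must come back intact from the same offset. Condensed from the trace (how dd.dump0 acquires the bytes is an assumption; gen_bytes is the repo helper seen at dd/common.sh@98):

count=1 seek=1 skip=1
data=$(gen_bytes 4096)        # the random blob dumped above
printf %s "$data" > dd.dump0  # assumption: dump0 holds the same bytes
"$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --seek="$seek" --json <(gen_conf)
"$spdk_dd" --ib=Nvme0n1 --of=dd.dump1 --skip="$skip" --count="$count" --json <(gen_conf)
read -rn4096 data_check < dd.dump1  # compare exactly the first 4096 bytes
[[ $data == "$data_check" ]]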
00:08:15.914 [2024-12-13 06:47:20.201506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70035 ] 00:08:15.914 [2024-12-13 06:47:20.339839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.914 [2024-12-13 06:47:20.372781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.174  [2024-12-13T06:47:20.693Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:16.174 00:08:16.174 06:47:20 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:16.174 ************************************ 00:08:16.174 END TEST dd_rw_offset 00:08:16.174 ************************************ 00:08:16.174 06:47:20 -- dd/basic_rw.sh@72 -- # [[ 6d7febb1qxufxwaj5g91rjj4yhdo2xogv8bn4plx7xjm35r7gk76cumbkvtfol6dndwo98njjv7ohq5tgw0rjxcmjkqkeotwml64tfkedgfh28e7xmqri7zy46i527yqk2j5n78nxylgnr8sbrz72mjjf0agq4f1odsch9epgjncfoq831xqjgv63amtnn14qcpacdba8mgkmcf2osslcw6ikyj0v1xxdvgqn4og2d5jyyabtgfvxzv8g219xsgsln3dn06a6rnotwkvzzu758lk817xas2f8oryk3evwojsvwnbya6bn64s6ns7xr0s0kkel9df5c3npf4yuk90mcvx0buswr7jqid96rs7ms0w2bfazrj6kvirs9vyl5xgky6mzfanbzv06jmvef8d1szcym3ayrvo90k1jr3cw68tf2uwu7k73fz36qb66r7raqnvceqjfn570y88vmnrwu675b9znxtk42jebq263pkmew2brbhgt25fzv28qeqahry17tzj10utz8k4n4bwmftredglsoq7q81ahgxv4fooa431vokl76971j1s1237ev3wv1p3j0nmm09qfnu06d8f5n55m62crioffguc1nwg4qe3h4edz3euwwt7ds7dabwhf61b7vi2l5f21eutxty6tvfj1umeuifk0jpscujen0zihmj8lrle2gk5jfxun3dxr2y7fsj6pqasjt8sh8jj2419i4fm8un02ki6mqkntux2t9aa5hewv682aac3mumxj6trkp26k21xhu932s4plhb2lr4twl2knwhzulo8jxnt4j1lomcj0mbb7a6ot88t2q4xacmxyzr94v4v7603ue6l9fs3he4zrh40g42g1kr0vepd85tv6znuqdl2nc0zlc0oq5swulfuqmm8q2gojg2kuxhoytey9m0znno2hpub2zaeus5wwm1148lcusdem8wrmqf0x7agvxs3widn388aa8yjq4q6lg3j42h0257s7tlp12x21d9si9d9gw1ag0oawq72s628bdn5bcxeziubu3nrl7gx8tynarwam4uk5yypv4ucihmj3cae40vqq65c039gezwf55v9ulmza847625xxy2b19bnecftoe5orvp4xq8o0mzeszt8rnckr06bsom83mykdiezb9gipqrc2jutbceaxlu5ijmsuvs1hngpzmpidm5gpm6bwlrswrf16yd02e8hrrcajnvgm5su190piisvzdamll4yjvpe3jv0na9qmpez4ey796fb8kdaviwpvwfh974kjg0nga70owastsz4kp6083t7denvj15us9u1t53jo86q0qnz3z7v9utcacmg80jc31uttohoiylgsyludfgtwh5vx63ockt1gd83a3l4mhb95rkzv67ltjp3n98qtxjx579n9lkm0hd5akveok3180e48oec7hybi0xzog6fs36hz6ne7p6mlhjost3dzmb7fk8wmo7hpr8gu6qxzeufou4kbkeq254bh3wuiyzl4e1dbz7ga03nw9notzusz49y1bt74w930wo1hkm1al3qg985away6ir5nmr9ktjccxl74rxyi22ywu13owhmufhv9os0sp5weeogh4yb3u54417zyb13ac8fwnhuj6l0k4xl5xtz33r9djkid1xz0sbcgk3f6mm87nn0d83dr251ibpgkqqrua3q6s8lnwildbm0dhvxr9bi3mpixqbm2xjbe3ihqafohslgf7p97gwc7e79vs540t0vv4xqzlk3bvvvz04vxf2hjd2qiac46wz04n93gq5vvoal3rbhupjvy500t19j41ys53qnxh9v6jawdbxakcq9exewoagdyd9676o3wrt975j2vybdlzgq622hgn1wlwcg6uxa4dgubk2re2rdggs4ukn8j08dslxb52366mekuudcdh8ogv28656147wfmqifwxzs8y8mck4gosnaeu30k7vmq1hej4zzy7vgn3q8ytl6b7quu6svv91om4llt4g4u76iheqwzdi8jve83zbd9cog78o9zi046mxae7jzo58oswgfjrs6txu4tqkhhhc9lx0w89nqqmq9ioxxrbyg6z027lmw5i6wtjq8qxpywmaiqolkv03y39de4itcicapaqdm78qv55k3o0tut5pdkiqkdkbu09u0m0m2w7as9sygolx8bjp9ueoyg6xvsgakxiyzl7oc5nxp9h1q53xtoskm3n1b2fwt1oa2c4e5o3yqhbs7ibpmcnts2wq6hbdtdnj0g0ddkq67jszdso6rga4p58g39t32uq2164f940inkvtw765pyjfeyztdg7obr448sgl34xowv9aulg96727sd99y1u6hx9kcviiz1dcqfkpg3102i3zoo7z0w9s0ogf8negaz19whm2iv1juy4tl8mlevxjjvvkg1habcdjejto0wj0ol910nwvyl6la6mczvrw0w1284uhd7oxgf8v9l2qizllvi5t4576vnndkgpsvqjgxfyioqqy8fi2cjieuwxmys5hgn30bs73qwxsiu5jzq7tv4y8hz0rzjc3chtjhnbi8jf68r22v13q21r4oi0ttewhikr992p1h9jnux7m17ipksspe54buib4wdthc2avqavs5brc4zz4xqdkjdu1dl5rejvzn6ylkgda0shc
6chtibcgmt2rurx8wd9s8gm1g9hpt470bvfuokxbh05xhhnf3n8n64zfd8m1ew05o7nlunwaxvlel16xu5uz2hpvzifegmd4a01h3coketmjcfv1aqancv6eu82y2olde6k57u8zwlzpmvn645lhhae5lo3clf3m0sxnat0op29wxuax0pf5z5xh4fda87hu2kjexuokbmu5sqdx3urc7bzyxhsn9aaviy2ru7a54q4gxiyktgrkpct9p3vf8h7xbs6e3101qm5ota00w2kytcjwcqhbuld3r4k56oi479s3i7xr20y87yokuw9hh7fbd54056ho4rf6detik7wrya6k4ozgb9yr7h8i295gp5l6m7u3inyeawmb3n2wku5bhj735lcp5w81yxjob83zkcjuvdtgfvw26eoboug5oz4tdbyb5a914r5ww9gleac4d3j7wj1eq83lq7dtfi0voizhnzsrrcnzsnba631f8c54mwhvb9tfxgy9viw3cq2zn8s85yd08xbtxh9eiv6e9nkbzu4f3nnydwzyw06nxups2g09nqwc2ia9w8r92idfm7x6uxmtvkg3ql4d0iy4l6lku51tv3o7zbfkoyy6puzlot7h1v9fontmhhpjrd9qyhv3n9vt74706dk4pio9y8ux0txb44ycrd7lptkmshchnf4m9fc4rmj3su756ytwt3o2cog2fh3g70i72b12wvz1j3bu9p1lv372pvuom1493yx4k6n4pvhpzwworqn9ftfiv8t2gsu1on46w4tzzmhew6rza8gnin3zowjua6ueeakk5a7h19uonpa9ujv0wp0u0cvdtps4e5yfg7kzyf7uskq7yk3cqouopl4s89gf13czye4vmn3p6pmwzu2ics1ahuzl558f3pnl6abq1clie0pcpwt3ojhp2cpxkoz7q9op2ogc96sokzp9y4mg31v264pwj28dwfbp7rvw869rl4zxukdlj2me4nd9i60t99jccp8zjcjp0krzi0997v7e1kkrrrk2ruthahp0cyleh9wmalkde4f4m4e2b9cy4sqi0phbzvq4gjecezgwwvelviz7rv4jvxwv3utvuszhg25c2wzp0n5kkf3xfn1z3gyb0q274ohwe7dl3141ej29xszc52z07z2ckon5fegkac7m5ui2p2lruzcqy18c08s52jdjz15t4k0cv7cnz8lmwldac5e3rq5fq8ueroftaxntfe9428gzkeeiyeo8m3bmi742cjtvh8r8xrv8ek347m4oo7gce7mrt18t1391i6ev018nmz8e56hyteay4gek9s7bjdji36y1zsc9h2qgpj09wbasxpol541la2mhzlmbbqqew5r88g0jdmbqkxhhn == \6\d\7\f\e\b\b\1\q\x\u\f\x\w\a\j\5\g\9\1\r\j\j\4\y\h\d\o\2\x\o\g\v\8\b\n\4\p\l\x\7\x\j\m\3\5\r\7\g\k\7\6\c\u\m\b\k\v\t\f\o\l\6\d\n\d\w\o\9\8\n\j\j\v\7\o\h\q\5\t\g\w\0\r\j\x\c\m\j\k\q\k\e\o\t\w\m\l\6\4\t\f\k\e\d\g\f\h\2\8\e\7\x\m\q\r\i\7\z\y\4\6\i\5\2\7\y\q\k\2\j\5\n\7\8\n\x\y\l\g\n\r\8\s\b\r\z\7\2\m\j\j\f\0\a\g\q\4\f\1\o\d\s\c\h\9\e\p\g\j\n\c\f\o\q\8\3\1\x\q\j\g\v\6\3\a\m\t\n\n\1\4\q\c\p\a\c\d\b\a\8\m\g\k\m\c\f\2\o\s\s\l\c\w\6\i\k\y\j\0\v\1\x\x\d\v\g\q\n\4\o\g\2\d\5\j\y\y\a\b\t\g\f\v\x\z\v\8\g\2\1\9\x\s\g\s\l\n\3\d\n\0\6\a\6\r\n\o\t\w\k\v\z\z\u\7\5\8\l\k\8\1\7\x\a\s\2\f\8\o\r\y\k\3\e\v\w\o\j\s\v\w\n\b\y\a\6\b\n\6\4\s\6\n\s\7\x\r\0\s\0\k\k\e\l\9\d\f\5\c\3\n\p\f\4\y\u\k\9\0\m\c\v\x\0\b\u\s\w\r\7\j\q\i\d\9\6\r\s\7\m\s\0\w\2\b\f\a\z\r\j\6\k\v\i\r\s\9\v\y\l\5\x\g\k\y\6\m\z\f\a\n\b\z\v\0\6\j\m\v\e\f\8\d\1\s\z\c\y\m\3\a\y\r\v\o\9\0\k\1\j\r\3\c\w\6\8\t\f\2\u\w\u\7\k\7\3\f\z\3\6\q\b\6\6\r\7\r\a\q\n\v\c\e\q\j\f\n\5\7\0\y\8\8\v\m\n\r\w\u\6\7\5\b\9\z\n\x\t\k\4\2\j\e\b\q\2\6\3\p\k\m\e\w\2\b\r\b\h\g\t\2\5\f\z\v\2\8\q\e\q\a\h\r\y\1\7\t\z\j\1\0\u\t\z\8\k\4\n\4\b\w\m\f\t\r\e\d\g\l\s\o\q\7\q\8\1\a\h\g\x\v\4\f\o\o\a\4\3\1\v\o\k\l\7\6\9\7\1\j\1\s\1\2\3\7\e\v\3\w\v\1\p\3\j\0\n\m\m\0\9\q\f\n\u\0\6\d\8\f\5\n\5\5\m\6\2\c\r\i\o\f\f\g\u\c\1\n\w\g\4\q\e\3\h\4\e\d\z\3\e\u\w\w\t\7\d\s\7\d\a\b\w\h\f\6\1\b\7\v\i\2\l\5\f\2\1\e\u\t\x\t\y\6\t\v\f\j\1\u\m\e\u\i\f\k\0\j\p\s\c\u\j\e\n\0\z\i\h\m\j\8\l\r\l\e\2\g\k\5\j\f\x\u\n\3\d\x\r\2\y\7\f\s\j\6\p\q\a\s\j\t\8\s\h\8\j\j\2\4\1\9\i\4\f\m\8\u\n\0\2\k\i\6\m\q\k\n\t\u\x\2\t\9\a\a\5\h\e\w\v\6\8\2\a\a\c\3\m\u\m\x\j\6\t\r\k\p\2\6\k\2\1\x\h\u\9\3\2\s\4\p\l\h\b\2\l\r\4\t\w\l\2\k\n\w\h\z\u\l\o\8\j\x\n\t\4\j\1\l\o\m\c\j\0\m\b\b\7\a\6\o\t\8\8\t\2\q\4\x\a\c\m\x\y\z\r\9\4\v\4\v\7\6\0\3\u\e\6\l\9\f\s\3\h\e\4\z\r\h\4\0\g\4\2\g\1\k\r\0\v\e\p\d\8\5\t\v\6\z\n\u\q\d\l\2\n\c\0\z\l\c\0\o\q\5\s\w\u\l\f\u\q\m\m\8\q\2\g\o\j\g\2\k\u\x\h\o\y\t\e\y\9\m\0\z\n\n\o\2\h\p\u\b\2\z\a\e\u\s\5\w\w\m\1\1\4\8\l\c\u\s\d\e\m\8\w\r\m\q\f\0\x\7\a\g\v\x\s\3\w\i\d\n\3\8\8\a\a\8\y\j\q\4\q\6\l\g\3\j\4\2\h\0\2\5\7\s\7\t\l\p\1\2\x\2\1\d\9\s\i\9\d\9\g\w\1\a\g\0\o\a\w\q\7\2\s\6\2\8\b\d\n\5\b\c\x\e\z\i\u\b\u\3\n\r\l\7\g\x\8\t\y\n\a\r\w\a\m\4\u\k\5\y\y\p\v\4\u
\c\i\h\m\j\3\c\a\e\4\0\v\q\q\6\5\c\0\3\9\g\e\z\w\f\5\5\v\9\u\l\m\z\a\8\4\7\6\2\5\x\x\y\2\b\1\9\b\n\e\c\f\t\o\e\5\o\r\v\p\4\x\q\8\o\0\m\z\e\s\z\t\8\r\n\c\k\r\0\6\b\s\o\m\8\3\m\y\k\d\i\e\z\b\9\g\i\p\q\r\c\2\j\u\t\b\c\e\a\x\l\u\5\i\j\m\s\u\v\s\1\h\n\g\p\z\m\p\i\d\m\5\g\p\m\6\b\w\l\r\s\w\r\f\1\6\y\d\0\2\e\8\h\r\r\c\a\j\n\v\g\m\5\s\u\1\9\0\p\i\i\s\v\z\d\a\m\l\l\4\y\j\v\p\e\3\j\v\0\n\a\9\q\m\p\e\z\4\e\y\7\9\6\f\b\8\k\d\a\v\i\w\p\v\w\f\h\9\7\4\k\j\g\0\n\g\a\7\0\o\w\a\s\t\s\z\4\k\p\6\0\8\3\t\7\d\e\n\v\j\1\5\u\s\9\u\1\t\5\3\j\o\8\6\q\0\q\n\z\3\z\7\v\9\u\t\c\a\c\m\g\8\0\j\c\3\1\u\t\t\o\h\o\i\y\l\g\s\y\l\u\d\f\g\t\w\h\5\v\x\6\3\o\c\k\t\1\g\d\8\3\a\3\l\4\m\h\b\9\5\r\k\z\v\6\7\l\t\j\p\3\n\9\8\q\t\x\j\x\5\7\9\n\9\l\k\m\0\h\d\5\a\k\v\e\o\k\3\1\8\0\e\4\8\o\e\c\7\h\y\b\i\0\x\z\o\g\6\f\s\3\6\h\z\6\n\e\7\p\6\m\l\h\j\o\s\t\3\d\z\m\b\7\f\k\8\w\m\o\7\h\p\r\8\g\u\6\q\x\z\e\u\f\o\u\4\k\b\k\e\q\2\5\4\b\h\3\w\u\i\y\z\l\4\e\1\d\b\z\7\g\a\0\3\n\w\9\n\o\t\z\u\s\z\4\9\y\1\b\t\7\4\w\9\3\0\w\o\1\h\k\m\1\a\l\3\q\g\9\8\5\a\w\a\y\6\i\r\5\n\m\r\9\k\t\j\c\c\x\l\7\4\r\x\y\i\2\2\y\w\u\1\3\o\w\h\m\u\f\h\v\9\o\s\0\s\p\5\w\e\e\o\g\h\4\y\b\3\u\5\4\4\1\7\z\y\b\1\3\a\c\8\f\w\n\h\u\j\6\l\0\k\4\x\l\5\x\t\z\3\3\r\9\d\j\k\i\d\1\x\z\0\s\b\c\g\k\3\f\6\m\m\8\7\n\n\0\d\8\3\d\r\2\5\1\i\b\p\g\k\q\q\r\u\a\3\q\6\s\8\l\n\w\i\l\d\b\m\0\d\h\v\x\r\9\b\i\3\m\p\i\x\q\b\m\2\x\j\b\e\3\i\h\q\a\f\o\h\s\l\g\f\7\p\9\7\g\w\c\7\e\7\9\v\s\5\4\0\t\0\v\v\4\x\q\z\l\k\3\b\v\v\v\z\0\4\v\x\f\2\h\j\d\2\q\i\a\c\4\6\w\z\0\4\n\9\3\g\q\5\v\v\o\a\l\3\r\b\h\u\p\j\v\y\5\0\0\t\1\9\j\4\1\y\s\5\3\q\n\x\h\9\v\6\j\a\w\d\b\x\a\k\c\q\9\e\x\e\w\o\a\g\d\y\d\9\6\7\6\o\3\w\r\t\9\7\5\j\2\v\y\b\d\l\z\g\q\6\2\2\h\g\n\1\w\l\w\c\g\6\u\x\a\4\d\g\u\b\k\2\r\e\2\r\d\g\g\s\4\u\k\n\8\j\0\8\d\s\l\x\b\5\2\3\6\6\m\e\k\u\u\d\c\d\h\8\o\g\v\2\8\6\5\6\1\4\7\w\f\m\q\i\f\w\x\z\s\8\y\8\m\c\k\4\g\o\s\n\a\e\u\3\0\k\7\v\m\q\1\h\e\j\4\z\z\y\7\v\g\n\3\q\8\y\t\l\6\b\7\q\u\u\6\s\v\v\9\1\o\m\4\l\l\t\4\g\4\u\7\6\i\h\e\q\w\z\d\i\8\j\v\e\8\3\z\b\d\9\c\o\g\7\8\o\9\z\i\0\4\6\m\x\a\e\7\j\z\o\5\8\o\s\w\g\f\j\r\s\6\t\x\u\4\t\q\k\h\h\h\c\9\l\x\0\w\8\9\n\q\q\m\q\9\i\o\x\x\r\b\y\g\6\z\0\2\7\l\m\w\5\i\6\w\t\j\q\8\q\x\p\y\w\m\a\i\q\o\l\k\v\0\3\y\3\9\d\e\4\i\t\c\i\c\a\p\a\q\d\m\7\8\q\v\5\5\k\3\o\0\t\u\t\5\p\d\k\i\q\k\d\k\b\u\0\9\u\0\m\0\m\2\w\7\a\s\9\s\y\g\o\l\x\8\b\j\p\9\u\e\o\y\g\6\x\v\s\g\a\k\x\i\y\z\l\7\o\c\5\n\x\p\9\h\1\q\5\3\x\t\o\s\k\m\3\n\1\b\2\f\w\t\1\o\a\2\c\4\e\5\o\3\y\q\h\b\s\7\i\b\p\m\c\n\t\s\2\w\q\6\h\b\d\t\d\n\j\0\g\0\d\d\k\q\6\7\j\s\z\d\s\o\6\r\g\a\4\p\5\8\g\3\9\t\3\2\u\q\2\1\6\4\f\9\4\0\i\n\k\v\t\w\7\6\5\p\y\j\f\e\y\z\t\d\g\7\o\b\r\4\4\8\s\g\l\3\4\x\o\w\v\9\a\u\l\g\9\6\7\2\7\s\d\9\9\y\1\u\6\h\x\9\k\c\v\i\i\z\1\d\c\q\f\k\p\g\3\1\0\2\i\3\z\o\o\7\z\0\w\9\s\0\o\g\f\8\n\e\g\a\z\1\9\w\h\m\2\i\v\1\j\u\y\4\t\l\8\m\l\e\v\x\j\j\v\v\k\g\1\h\a\b\c\d\j\e\j\t\o\0\w\j\0\o\l\9\1\0\n\w\v\y\l\6\l\a\6\m\c\z\v\r\w\0\w\1\2\8\4\u\h\d\7\o\x\g\f\8\v\9\l\2\q\i\z\l\l\v\i\5\t\4\5\7\6\v\n\n\d\k\g\p\s\v\q\j\g\x\f\y\i\o\q\q\y\8\f\i\2\c\j\i\e\u\w\x\m\y\s\5\h\g\n\3\0\b\s\7\3\q\w\x\s\i\u\5\j\z\q\7\t\v\4\y\8\h\z\0\r\z\j\c\3\c\h\t\j\h\n\b\i\8\j\f\6\8\r\2\2\v\1\3\q\2\1\r\4\o\i\0\t\t\e\w\h\i\k\r\9\9\2\p\1\h\9\j\n\u\x\7\m\1\7\i\p\k\s\s\p\e\5\4\b\u\i\b\4\w\d\t\h\c\2\a\v\q\a\v\s\5\b\r\c\4\z\z\4\x\q\d\k\j\d\u\1\d\l\5\r\e\j\v\z\n\6\y\l\k\g\d\a\0\s\h\c\6\c\h\t\i\b\c\g\m\t\2\r\u\r\x\8\w\d\9\s\8\g\m\1\g\9\h\p\t\4\7\0\b\v\f\u\o\k\x\b\h\0\5\x\h\h\n\f\3\n\8\n\6\4\z\f\d\8\m\1\e\w\0\5\o\7\n\l\u\n\w\a\x\v\l\e\l\1\6\x\u\5\u\z\2\h\p\v\z\i\f\e\g\m\d\4\a\0\1\h\3\c\o\k\e\t\m\j\c\f\v\1\a\q\a\n\c\v\6\e\u\8\2\y\2\o\l\d\e\6\k\5\7\u\8\z\w\l\z\p\m\v\n\6\4\5\l\h\h\a\e\5\l\
o\3\c\l\f\3\m\0\s\x\n\a\t\0\o\p\2\9\w\x\u\a\x\0\p\f\5\z\5\x\h\4\f\d\a\8\7\h\u\2\k\j\e\x\u\o\k\b\m\u\5\s\q\d\x\3\u\r\c\7\b\z\y\x\h\s\n\9\a\a\v\i\y\2\r\u\7\a\5\4\q\4\g\x\i\y\k\t\g\r\k\p\c\t\9\p\3\v\f\8\h\7\x\b\s\6\e\3\1\0\1\q\m\5\o\t\a\0\0\w\2\k\y\t\c\j\w\c\q\h\b\u\l\d\3\r\4\k\5\6\o\i\4\7\9\s\3\i\7\x\r\2\0\y\8\7\y\o\k\u\w\9\h\h\7\f\b\d\5\4\0\5\6\h\o\4\r\f\6\d\e\t\i\k\7\w\r\y\a\6\k\4\o\z\g\b\9\y\r\7\h\8\i\2\9\5\g\p\5\l\6\m\7\u\3\i\n\y\e\a\w\m\b\3\n\2\w\k\u\5\b\h\j\7\3\5\l\c\p\5\w\8\1\y\x\j\o\b\8\3\z\k\c\j\u\v\d\t\g\f\v\w\2\6\e\o\b\o\u\g\5\o\z\4\t\d\b\y\b\5\a\9\1\4\r\5\w\w\9\g\l\e\a\c\4\d\3\j\7\w\j\1\e\q\8\3\l\q\7\d\t\f\i\0\v\o\i\z\h\n\z\s\r\r\c\n\z\s\n\b\a\6\3\1\f\8\c\5\4\m\w\h\v\b\9\t\f\x\g\y\9\v\i\w\3\c\q\2\z\n\8\s\8\5\y\d\0\8\x\b\t\x\h\9\e\i\v\6\e\9\n\k\b\z\u\4\f\3\n\n\y\d\w\z\y\w\0\6\n\x\u\p\s\2\g\0\9\n\q\w\c\2\i\a\9\w\8\r\9\2\i\d\f\m\7\x\6\u\x\m\t\v\k\g\3\q\l\4\d\0\i\y\4\l\6\l\k\u\5\1\t\v\3\o\7\z\b\f\k\o\y\y\6\p\u\z\l\o\t\7\h\1\v\9\f\o\n\t\m\h\h\p\j\r\d\9\q\y\h\v\3\n\9\v\t\7\4\7\0\6\d\k\4\p\i\o\9\y\8\u\x\0\t\x\b\4\4\y\c\r\d\7\l\p\t\k\m\s\h\c\h\n\f\4\m\9\f\c\4\r\m\j\3\s\u\7\5\6\y\t\w\t\3\o\2\c\o\g\2\f\h\3\g\7\0\i\7\2\b\1\2\w\v\z\1\j\3\b\u\9\p\1\l\v\3\7\2\p\v\u\o\m\1\4\9\3\y\x\4\k\6\n\4\p\v\h\p\z\w\w\o\r\q\n\9\f\t\f\i\v\8\t\2\g\s\u\1\o\n\4\6\w\4\t\z\z\m\h\e\w\6\r\z\a\8\g\n\i\n\3\z\o\w\j\u\a\6\u\e\e\a\k\k\5\a\7\h\1\9\u\o\n\p\a\9\u\j\v\0\w\p\0\u\0\c\v\d\t\p\s\4\e\5\y\f\g\7\k\z\y\f\7\u\s\k\q\7\y\k\3\c\q\o\u\o\p\l\4\s\8\9\g\f\1\3\c\z\y\e\4\v\m\n\3\p\6\p\m\w\z\u\2\i\c\s\1\a\h\u\z\l\5\5\8\f\3\p\n\l\6\a\b\q\1\c\l\i\e\0\p\c\p\w\t\3\o\j\h\p\2\c\p\x\k\o\z\7\q\9\o\p\2\o\g\c\9\6\s\o\k\z\p\9\y\4\m\g\3\1\v\2\6\4\p\w\j\2\8\d\w\f\b\p\7\r\v\w\8\6\9\r\l\4\z\x\u\k\d\l\j\2\m\e\4\n\d\9\i\6\0\t\9\9\j\c\c\p\8\z\j\c\j\p\0\k\r\z\i\0\9\9\7\v\7\e\1\k\k\r\r\r\k\2\r\u\t\h\a\h\p\0\c\y\l\e\h\9\w\m\a\l\k\d\e\4\f\4\m\4\e\2\b\9\c\y\4\s\q\i\0\p\h\b\z\v\q\4\g\j\e\c\e\z\g\w\w\v\e\l\v\i\z\7\r\v\4\j\v\x\w\v\3\u\t\v\u\s\z\h\g\2\5\c\2\w\z\p\0\n\5\k\k\f\3\x\f\n\1\z\3\g\y\b\0\q\2\7\4\o\h\w\e\7\d\l\3\1\4\1\e\j\2\9\x\s\z\c\5\2\z\0\7\z\2\c\k\o\n\5\f\e\g\k\a\c\7\m\5\u\i\2\p\2\l\r\u\z\c\q\y\1\8\c\0\8\s\5\2\j\d\j\z\1\5\t\4\k\0\c\v\7\c\n\z\8\l\m\w\l\d\a\c\5\e\3\r\q\5\f\q\8\u\e\r\o\f\t\a\x\n\t\f\e\9\4\2\8\g\z\k\e\e\i\y\e\o\8\m\3\b\m\i\7\4\2\c\j\t\v\h\8\r\8\x\r\v\8\e\k\3\4\7\m\4\o\o\7\g\c\e\7\m\r\t\1\8\t\1\3\9\1\i\6\e\v\0\1\8\n\m\z\8\e\5\6\h\y\t\e\a\y\4\g\e\k\9\s\7\b\j\d\j\i\3\6\y\1\z\s\c\9\h\2\q\g\p\j\0\9\w\b\a\s\x\p\o\l\5\4\1\l\a\2\m\h\z\l\m\b\b\q\q\e\w\5\r\8\8\g\0\j\d\m\b\q\k\x\h\h\n ]] 00:08:16.174 00:08:16.175 real 0m1.020s 00:08:16.175 user 0m0.671s 00:08:16.175 sys 0m0.221s 00:08:16.175 06:47:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.175 06:47:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.175 06:47:20 -- dd/basic_rw.sh@1 -- # cleanup 00:08:16.175 06:47:20 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:16.175 06:47:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:16.175 06:47:20 -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.175 06:47:20 -- dd/common.sh@12 -- # local size=0xffff 00:08:16.175 06:47:20 -- dd/common.sh@14 -- # local bs=1048576 00:08:16.175 06:47:20 -- dd/common.sh@15 -- # local count=1 00:08:16.175 06:47:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:16.175 06:47:20 -- dd/common.sh@18 -- # gen_conf 00:08:16.175 06:47:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:16.175 06:47:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.434 [2024-12-13 06:47:20.724241] Starting SPDK 
v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.434 [2024-12-13 06:47:20.724399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70057 ] 00:08:16.434 { 00:08:16.434 "subsystems": [ 00:08:16.434 { 00:08:16.434 "subsystem": "bdev", 00:08:16.434 "config": [ 00:08:16.434 { 00:08:16.434 "params": { 00:08:16.434 "trtype": "pcie", 00:08:16.434 "traddr": "0000:00:06.0", 00:08:16.434 "name": "Nvme0" 00:08:16.434 }, 00:08:16.434 "method": "bdev_nvme_attach_controller" 00:08:16.434 }, 00:08:16.434 { 00:08:16.434 "method": "bdev_wait_for_examine" 00:08:16.434 } 00:08:16.434 ] 00:08:16.434 } 00:08:16.434 ] 00:08:16.434 } 00:08:16.434 [2024-12-13 06:47:20.864812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.434 [2024-12-13 06:47:20.898981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.693  [2024-12-13T06:47:21.212Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.693 00:08:16.693 06:47:21 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.693 00:08:16.693 real 0m14.428s 00:08:16.693 user 0m10.213s 00:08:16.693 sys 0m2.821s 00:08:16.693 06:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.693 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.693 ************************************ 00:08:16.693 END TEST spdk_dd_basic_rw 00:08:16.693 ************************************ 00:08:16.693 06:47:21 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:16.693 06:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.693 06:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.693 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.953 ************************************ 00:08:16.953 START TEST spdk_dd_posix 00:08:16.953 ************************************ 00:08:16.953 06:47:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:16.953 * Looking for test storage... 
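Each banner pair above comes from the run_test wrapper in autotest_common.sh. Judging by the asterisk banners, the '[' 2 -le 1 ']' argument guard, and the real/user/sys lines in the trace, it behaves roughly like this sketch (not the verbatim helper):

run_test() {
  [ "$#" -le 1 ] && return 1  # the guard traced as '[' 2 -le 1 ']'
  local name=$1; shift
  printf '%s\n' '************************************' "START TEST $name" '************************************'
  time "$@"; local rc=$?
  printf '%s\n' '************************************' "END TEST $name" '************************************'
  return $rc
}

The totals in the log (real 0m14.428s for the whole basic_rw suite just ended) are consistent with the body running under the time keyword.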
00:08:16.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:16.953 06:47:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:16.953 06:47:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:16.953 06:47:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:16.953 06:47:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:16.953 06:47:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:16.953 06:47:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:16.953 06:47:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:16.953 06:47:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:16.953 06:47:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:16.953 06:47:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.953 06:47:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:16.953 06:47:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:16.953 06:47:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:16.953 06:47:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:16.953 06:47:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:16.953 06:47:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:16.953 06:47:21 -- scripts/common.sh@344 -- # : 1 00:08:16.953 06:47:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:16.953 06:47:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.953 06:47:21 -- scripts/common.sh@364 -- # decimal 1 00:08:16.953 06:47:21 -- scripts/common.sh@352 -- # local d=1 00:08:16.953 06:47:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.953 06:47:21 -- scripts/common.sh@354 -- # echo 1 00:08:16.953 06:47:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:16.953 06:47:21 -- scripts/common.sh@365 -- # decimal 2 00:08:16.953 06:47:21 -- scripts/common.sh@352 -- # local d=2 00:08:16.953 06:47:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.953 06:47:21 -- scripts/common.sh@354 -- # echo 2 00:08:16.953 06:47:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:16.953 06:47:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:16.953 06:47:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:16.953 06:47:21 -- scripts/common.sh@367 -- # return 0 00:08:16.953 06:47:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.953 06:47:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:16.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.953 --rc genhtml_branch_coverage=1 00:08:16.953 --rc genhtml_function_coverage=1 00:08:16.953 --rc genhtml_legend=1 00:08:16.953 --rc geninfo_all_blocks=1 00:08:16.953 --rc geninfo_unexecuted_blocks=1 00:08:16.953 00:08:16.953 ' 00:08:16.953 06:47:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:16.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.953 --rc genhtml_branch_coverage=1 00:08:16.953 --rc genhtml_function_coverage=1 00:08:16.953 --rc genhtml_legend=1 00:08:16.953 --rc geninfo_all_blocks=1 00:08:16.953 --rc geninfo_unexecuted_blocks=1 00:08:16.953 00:08:16.953 ' 00:08:16.953 06:47:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:16.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.953 --rc genhtml_branch_coverage=1 00:08:16.953 --rc genhtml_function_coverage=1 00:08:16.953 --rc genhtml_legend=1 00:08:16.953 --rc geninfo_all_blocks=1 00:08:16.953 --rc geninfo_unexecuted_blocks=1 00:08:16.953 00:08:16.953 ' 00:08:16.953 06:47:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:16.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.953 --rc genhtml_branch_coverage=1 00:08:16.953 --rc genhtml_function_coverage=1 00:08:16.953 --rc genhtml_legend=1 00:08:16.953 --rc geninfo_all_blocks=1 00:08:16.953 --rc geninfo_unexecuted_blocks=1 00:08:16.953 00:08:16.953 ' 00:08:16.953 06:47:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.953 06:47:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.953 06:47:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.953 06:47:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.953 06:47:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.953 06:47:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.953 06:47:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.953 06:47:21 -- paths/export.sh@5 -- # export PATH 00:08:16.953 06:47:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.953 06:47:21 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:16.953 06:47:21 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:16.953 06:47:21 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:16.953 06:47:21 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:16.953 06:47:21 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.953 06:47:21 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.953 06:47:21 -- dd/posix.sh@130 -- # tests 00:08:16.953 06:47:21 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:16.953 * First test run, liburing in use 00:08:16.953 06:47:21 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:16.953 06:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.953 06:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.953 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.953 ************************************ 00:08:16.953 START TEST dd_flag_append 00:08:16.953 ************************************ 00:08:16.953 06:47:21 -- common/autotest_common.sh@1114 -- # append 00:08:16.953 06:47:21 -- dd/posix.sh@16 -- # local dump0 00:08:16.953 06:47:21 -- dd/posix.sh@17 -- # local dump1 00:08:16.953 06:47:21 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:16.953 06:47:21 -- dd/common.sh@98 -- # xtrace_disable 00:08:16.953 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.953 06:47:21 -- dd/posix.sh@19 -- # dump0=mehpvck6lr61mmks0bl0vf8bltx2qfnz 00:08:16.953 06:47:21 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:16.953 06:47:21 -- dd/common.sh@98 -- # xtrace_disable 00:08:16.953 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.953 06:47:21 -- dd/posix.sh@20 -- # dump1=1rj2zsmp02mmas3orm3pptzvz7xtey1g 00:08:16.953 06:47:21 -- dd/posix.sh@22 -- # printf %s mehpvck6lr61mmks0bl0vf8bltx2qfnz 00:08:16.953 06:47:21 -- dd/posix.sh@23 -- # printf %s 1rj2zsmp02mmas3orm3pptzvz7xtey1g 00:08:16.953 06:47:21 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:16.953 [2024-12-13 06:47:21.468560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
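The lcov probe a few records back walks cmp_versions from scripts/common.sh, which splits version strings on '.', '-' and ':' and compares them field by field; the traced call (lt 1.15 2, true because 1 < 2 in the first field) fits this reconstruction, offered as a sketch of the traced logic rather than the verbatim script:

decimal() {  # echo the field when it is a plain integer, else nothing
  local d=$1
  [[ $d =~ ^[0-9]+$ ]] && echo "$d"
}

cmp_versions() {
  local ver1 ver2 ver1_l ver2_l v op=$2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    ver1[v]=$(decimal "${ver1[v]:-0}")
    ver2[v]=$(decimal "${ver2[v]:-0}")
    (( ver1[v] > ver2[v] )) && { [[ $op == '>' || $op == '>=' ]]; return; }
    (( ver1[v] < ver2[v] )) && { [[ $op == '<' || $op == '<=' ]]; return; }
  done
  [[ $op == *'='* ]]  # all fields equal: only ==, <= and >= succeed
}

lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 succeeds, as traced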
00:08:16.953 [2024-12-13 06:47:21.468667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70127 ] 00:08:17.214 [2024-12-13 06:47:21.603864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.214 [2024-12-13 06:47:21.638524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.214  [2024-12-13T06:47:22.010Z] Copying: 32/32 [B] (average 31 kBps) 00:08:17.491 00:08:17.491 06:47:21 -- dd/posix.sh@27 -- # [[ 1rj2zsmp02mmas3orm3pptzvz7xtey1gmehpvck6lr61mmks0bl0vf8bltx2qfnz == \1\r\j\2\z\s\m\p\0\2\m\m\a\s\3\o\r\m\3\p\p\t\z\v\z\7\x\t\e\y\1\g\m\e\h\p\v\c\k\6\l\r\6\1\m\m\k\s\0\b\l\0\v\f\8\b\l\t\x\2\q\f\n\z ]] 00:08:17.491 00:08:17.491 real 0m0.413s 00:08:17.491 user 0m0.195s 00:08:17.491 sys 0m0.097s 00:08:17.491 06:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.491 ************************************ 00:08:17.491 END TEST dd_flag_append 00:08:17.491 ************************************ 00:08:17.491 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:17.491 06:47:21 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:17.491 06:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.491 06:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.491 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:17.491 ************************************ 00:08:17.491 START TEST dd_flag_directory 00:08:17.491 ************************************ 00:08:17.491 06:47:21 -- common/autotest_common.sh@1114 -- # directory 00:08:17.491 06:47:21 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.491 06:47:21 -- common/autotest_common.sh@650 -- # local es=0 00:08:17.491 06:47:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.491 06:47:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.491 06:47:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.491 06:47:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.491 06:47:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.491 06:47:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.491 06:47:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.491 06:47:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.491 06:47:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.491 06:47:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.491 [2024-12-13 06:47:21.933469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
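The dd_flag_append check above reads as follows: dd.dump1 starts out holding the second 32-byte random string, spdk_dd then re-writes dump0's bytes onto it with --oflag=append, and the result must be the two strings back to back, dump1's first. The whole test reduces to this (random strings abbreviated, gen_bytes as before):

dump0=$(gen_bytes 32)  # mehpvck6... in this run
dump1=$(gen_bytes 32)  # 1rj2zsmp... in this run
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
"$spdk_dd" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(< dd.dump1) == "$dump1$dump0" ]]  # appended file = dump1's bytes then dump0's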
00:08:17.491 [2024-12-13 06:47:21.933580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70148 ] 00:08:17.765 [2024-12-13 06:47:22.071104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.765 [2024-12-13 06:47:22.103739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.765 [2024-12-13 06:47:22.149757] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.765 [2024-12-13 06:47:22.149832] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.765 [2024-12-13 06:47:22.149861] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.765 [2024-12-13 06:47:22.205855] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:17.765 06:47:22 -- common/autotest_common.sh@653 -- # es=236 00:08:17.765 06:47:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.765 06:47:22 -- common/autotest_common.sh@662 -- # es=108 00:08:17.765 06:47:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.765 06:47:22 -- common/autotest_common.sh@670 -- # es=1 00:08:17.765 06:47:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.765 06:47:22 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.765 06:47:22 -- common/autotest_common.sh@650 -- # local es=0 00:08:17.765 06:47:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.765 06:47:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.765 06:47:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.765 06:47:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.765 06:47:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.765 06:47:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.765 06:47:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.765 06:47:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.765 06:47:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.765 06:47:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.024 [2024-12-13 06:47:22.320938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
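Both directory runs are wrapped in NOT, whose exit-status handling is visible in the trace: spdk_dd exits 236, anything above 128 has 128 stripped (236 -> 108), the case statement collapses the remainder to 1, and (( !es == 0 )) then succeeds exactly when the wrapped command failed. A sketch of the helper reconstructed from that trace (the real autotest_common.sh may differ in detail):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))  # strip the 128 offset: 236 -> 108
        case "$es" in
            0) ;;                           # wrapped command unexpectedly passed
            *) es=1 ;;                      # collapse any failure code to 1
        esac
        (( !es == 0 ))                      # exit 0 only if the command failed
    }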
00:08:18.024 [2024-12-13 06:47:22.321034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70163 ] 00:08:18.024 [2024-12-13 06:47:22.456103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.024 [2024-12-13 06:47:22.494068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.024 [2024-12-13 06:47:22.538378] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.024 [2024-12-13 06:47:22.538439] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.024 [2024-12-13 06:47:22.538468] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.283 [2024-12-13 06:47:22.601156] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.283 06:47:22 -- common/autotest_common.sh@653 -- # es=236 00:08:18.283 06:47:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.283 06:47:22 -- common/autotest_common.sh@662 -- # es=108 00:08:18.283 06:47:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.283 06:47:22 -- common/autotest_common.sh@670 -- # es=1 00:08:18.283 06:47:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.283 00:08:18.283 real 0m0.784s 00:08:18.283 user 0m0.393s 00:08:18.283 sys 0m0.185s 00:08:18.283 06:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.283 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 ************************************ 00:08:18.283 END TEST dd_flag_directory 00:08:18.283 ************************************ 00:08:18.283 06:47:22 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:18.283 06:47:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.283 06:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.283 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 ************************************ 00:08:18.283 START TEST dd_flag_nofollow 00:08:18.283 ************************************ 00:08:18.283 06:47:22 -- common/autotest_common.sh@1114 -- # nofollow 00:08:18.283 06:47:22 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:18.283 06:47:22 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:18.283 06:47:22 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:18.283 06:47:22 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:18.283 06:47:22 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.283 06:47:22 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.283 06:47:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.283 06:47:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.283 06:47:22 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.283 06:47:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.283 06:47:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.283 06:47:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.283 06:47:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.283 06:47:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.283 06:47:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.283 06:47:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.283 [2024-12-13 06:47:22.777431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.283 [2024-12-13 06:47:22.777523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70186 ] 00:08:18.543 [2024-12-13 06:47:22.916851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.543 [2024-12-13 06:47:22.953574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.543 [2024-12-13 06:47:22.998979] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:18.543 [2024-12-13 06:47:22.999037] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:18.543 [2024-12-13 06:47:22.999052] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.803 [2024-12-13 06:47:23.062754] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.803 06:47:23 -- common/autotest_common.sh@653 -- # es=216 00:08:18.803 06:47:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.803 06:47:23 -- common/autotest_common.sh@662 -- # es=88 00:08:18.803 06:47:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.803 06:47:23 -- common/autotest_common.sh@670 -- # es=1 00:08:18.803 06:47:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.803 06:47:23 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.803 06:47:23 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.803 06:47:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.803 06:47:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.803 06:47:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.803 06:47:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.803 06:47:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.803 06:47:23 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.803 06:47:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.803 06:47:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.803 06:47:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.803 06:47:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.803 [2024-12-13 06:47:23.180527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.803 [2024-12-13 06:47:23.180621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70196 ] 00:08:18.803 [2024-12-13 06:47:23.320602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.062 [2024-12-13 06:47:23.364487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.062 [2024-12-13 06:47:23.415401] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.062 [2024-12-13 06:47:23.415480] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.062 [2024-12-13 06:47:23.415510] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.062 [2024-12-13 06:47:23.471461] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:19.062 06:47:23 -- common/autotest_common.sh@653 -- # es=216 00:08:19.063 06:47:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.063 06:47:23 -- common/autotest_common.sh@662 -- # es=88 00:08:19.063 06:47:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.063 06:47:23 -- common/autotest_common.sh@670 -- # es=1 00:08:19.063 06:47:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.063 06:47:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:19.063 06:47:23 -- dd/common.sh@98 -- # xtrace_disable 00:08:19.063 06:47:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.063 06:47:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.322 [2024-12-13 06:47:23.590254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
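The nofollow test builds symlinks to both dump files and expects ELOOP ("Too many levels of symbolic links") whenever a link is opened with the nofollow flag, on either side of the copy; the run starting above then copies through the link without nofollow and must succeed. In outline (paths shortened to basenames):

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1  # ELOOP on input
    NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow  # ELOOP on output
    spdk_dd --if=dd.dump0.link --of=dd.dump1                       # link followed, copy OK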
00:08:19.322 [2024-12-13 06:47:23.590417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70203 ] 00:08:19.322 [2024-12-13 06:47:23.728577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.322 [2024-12-13 06:47:23.760270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.322  [2024-12-13T06:47:24.100Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.581 00:08:19.581 06:47:23 -- dd/posix.sh@49 -- # [[ 361u84898fjx4hlkijyxc732fdjyscq7ht5n4k12hsitguivcuinknc2748tu91iozq01jfiwdkhdpgrsd6k2zvkx25up5bq13z2z3g8i3q8ml5qkw6m28kunekv7chljp56gv05jfge3qvcm0eqnz1ftofsaysct6lkgk252wfoybv9cwbqplzg5ybjdc1p8meovtuek8jug9tiihyd7n2lt3i1fxftzupvhkq3qyiyc90wtz1pbsj4ojklgjs2b0q0vhq1fzn0fcwfzwh8piz3pfwuz98gu8sthldez10j624ol4docmzjrl8rf053e6s4ipf4z0kdm2urmwf5ep3fh85klhchsvaadnw0wu2wys4xteyvjq95ksmm9yz3uls8lokgw3dqzq1aekf1hwcr5prdzi12ggzdzy7wq4vcjtt8cuh5c5382bswxo1ydhmpdwj572l3j6l5abnkhhombzezfuumor85det1erpwjz2mptv6rf70es6wdr2a == \3\6\1\u\8\4\8\9\8\f\j\x\4\h\l\k\i\j\y\x\c\7\3\2\f\d\j\y\s\c\q\7\h\t\5\n\4\k\1\2\h\s\i\t\g\u\i\v\c\u\i\n\k\n\c\2\7\4\8\t\u\9\1\i\o\z\q\0\1\j\f\i\w\d\k\h\d\p\g\r\s\d\6\k\2\z\v\k\x\2\5\u\p\5\b\q\1\3\z\2\z\3\g\8\i\3\q\8\m\l\5\q\k\w\6\m\2\8\k\u\n\e\k\v\7\c\h\l\j\p\5\6\g\v\0\5\j\f\g\e\3\q\v\c\m\0\e\q\n\z\1\f\t\o\f\s\a\y\s\c\t\6\l\k\g\k\2\5\2\w\f\o\y\b\v\9\c\w\b\q\p\l\z\g\5\y\b\j\d\c\1\p\8\m\e\o\v\t\u\e\k\8\j\u\g\9\t\i\i\h\y\d\7\n\2\l\t\3\i\1\f\x\f\t\z\u\p\v\h\k\q\3\q\y\i\y\c\9\0\w\t\z\1\p\b\s\j\4\o\j\k\l\g\j\s\2\b\0\q\0\v\h\q\1\f\z\n\0\f\c\w\f\z\w\h\8\p\i\z\3\p\f\w\u\z\9\8\g\u\8\s\t\h\l\d\e\z\1\0\j\6\2\4\o\l\4\d\o\c\m\z\j\r\l\8\r\f\0\5\3\e\6\s\4\i\p\f\4\z\0\k\d\m\2\u\r\m\w\f\5\e\p\3\f\h\8\5\k\l\h\c\h\s\v\a\a\d\n\w\0\w\u\2\w\y\s\4\x\t\e\y\v\j\q\9\5\k\s\m\m\9\y\z\3\u\l\s\8\l\o\k\g\w\3\d\q\z\q\1\a\e\k\f\1\h\w\c\r\5\p\r\d\z\i\1\2\g\g\z\d\z\y\7\w\q\4\v\c\j\t\t\8\c\u\h\5\c\5\3\8\2\b\s\w\x\o\1\y\d\h\m\p\d\w\j\5\7\2\l\3\j\6\l\5\a\b\n\k\h\h\o\m\b\z\e\z\f\u\u\m\o\r\8\5\d\e\t\1\e\r\p\w\j\z\2\m\p\t\v\6\r\f\7\0\e\s\6\w\d\r\2\a ]] 00:08:19.581 00:08:19.581 real 0m1.209s 00:08:19.581 user 0m0.594s 00:08:19.581 sys 0m0.290s 00:08:19.581 06:47:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.581 ************************************ 00:08:19.581 END TEST dd_flag_nofollow 00:08:19.581 06:47:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.581 ************************************ 00:08:19.581 06:47:23 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:19.581 06:47:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.581 06:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.581 06:47:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.581 ************************************ 00:08:19.581 START TEST dd_flag_noatime 00:08:19.581 ************************************ 00:08:19.581 06:47:23 -- common/autotest_common.sh@1114 -- # noatime 00:08:19.581 06:47:23 -- dd/posix.sh@53 -- # local atime_if 00:08:19.581 06:47:23 -- dd/posix.sh@54 -- # local atime_of 00:08:19.581 06:47:23 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:19.581 06:47:23 -- dd/common.sh@98 -- # xtrace_disable 00:08:19.581 06:47:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.581 06:47:23 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.581 06:47:23 -- dd/posix.sh@60 -- # atime_if=1734072443 
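atime_if above is dump0's access time before the copy, captured with stat --printf=%X (epoch seconds). The shape of the check that follows in the trace:

    atime_if=$(stat --printf=%X dd.dump0)  # access time before the read
    sleep 1                                # a normal read would now move atime forward
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))  # O_NOATIME left atime untouched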
00:08:19.581 06:47:23 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.581 06:47:23 -- dd/posix.sh@61 -- # atime_of=1734072443 00:08:19.581 06:47:23 -- dd/posix.sh@66 -- # sleep 1 00:08:20.518 06:47:25 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.777 [2024-12-13 06:47:25.057968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:20.777 [2024-12-13 06:47:25.058086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:08:20.777 [2024-12-13 06:47:25.199602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.777 [2024-12-13 06:47:25.249645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.035  [2024-12-13T06:47:25.554Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.035 00:08:21.035 06:47:25 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.035 06:47:25 -- dd/posix.sh@69 -- # (( atime_if == 1734072443 )) 00:08:21.036 06:47:25 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.036 06:47:25 -- dd/posix.sh@70 -- # (( atime_of == 1734072443 )) 00:08:21.036 06:47:25 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.036 [2024-12-13 06:47:25.529434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:21.036 [2024-12-13 06:47:25.529530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70255 ] 00:08:21.294 [2024-12-13 06:47:25.669257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.294 [2024-12-13 06:47:25.713547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.294  [2024-12-13T06:47:26.073Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.554 00:08:21.554 06:47:25 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.554 06:47:25 -- dd/posix.sh@73 -- # (( atime_if < 1734072445 )) 00:08:21.554 00:08:21.554 real 0m1.951s 00:08:21.554 user 0m0.473s 00:08:21.554 sys 0m0.239s 00:08:21.554 06:47:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.554 ************************************ 00:08:21.554 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.554 END TEST dd_flag_noatime 00:08:21.554 ************************************ 00:08:21.554 06:47:25 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:21.554 06:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.554 06:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.554 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.554 ************************************ 00:08:21.554 START TEST dd_flags_misc 00:08:21.554 ************************************ 00:08:21.554 06:47:25 -- common/autotest_common.sh@1114 -- # io 00:08:21.554 06:47:25 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:21.554 06:47:25 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:21.554 06:47:25 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:21.554 06:47:25 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.554 06:47:25 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.554 06:47:25 -- dd/common.sh@98 -- # xtrace_disable 00:08:21.554 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.554 06:47:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.554 06:47:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:21.554 [2024-12-13 06:47:26.046177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
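dd_flags_misc drives a small matrix, reconstructed from the posix.sh xtrace above: two read-side flags crossed with four write-side flags, so the run starting above is the first of eight (pids 70276 through 70323 below):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # read flags reappear on the write side
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > dd.dump0             # redirect assumed; the helper may write the file itself
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        done
    done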
00:08:21.554 [2024-12-13 06:47:26.046275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70276 ] 00:08:21.813 [2024-12-13 06:47:26.187565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.813 [2024-12-13 06:47:26.234171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.813  [2024-12-13T06:47:26.591Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.072 00:08:22.072 06:47:26 -- dd/posix.sh@93 -- # [[ ndxwse0zc5nxjuc8xek88dfjf2skky6h4qbqsj3fru07xths7xay56g5sm1ijsn2sbgvfskeb8na10qcrv0mvrrww2jyrxebizlx6pc22noj0irf9prt4uccd07o5fd8hxj0iq4gmks4182rh70m8j0r5y1nfep73rf8gwhf9xkl59l98jgipv8jz5t5ch9eu01ntzjtiy5uydjl1fnlnoei5a7aa890bnc90g0ze6bk1yz7ircp699tpe4e085t6nhcqg7jw2m7h522ew8scmzlh7392vkmaxlxh5giwplheyk5cljgcnepfmknj7qk8rdzok3fajyh19flkvjfk1to6cnz6zjo965wlin80a487uhnp3p238p9mu6tab2nj479snfw687i1e0ofzixib5du8noml0ro64vcpxj9cfkp4qtf27jacointf8er843g8a34brwvq2ku13hr0r2ipcgwgp8b550wc7ti99ray5krwt76ewnfu67ld87ux4 == \n\d\x\w\s\e\0\z\c\5\n\x\j\u\c\8\x\e\k\8\8\d\f\j\f\2\s\k\k\y\6\h\4\q\b\q\s\j\3\f\r\u\0\7\x\t\h\s\7\x\a\y\5\6\g\5\s\m\1\i\j\s\n\2\s\b\g\v\f\s\k\e\b\8\n\a\1\0\q\c\r\v\0\m\v\r\r\w\w\2\j\y\r\x\e\b\i\z\l\x\6\p\c\2\2\n\o\j\0\i\r\f\9\p\r\t\4\u\c\c\d\0\7\o\5\f\d\8\h\x\j\0\i\q\4\g\m\k\s\4\1\8\2\r\h\7\0\m\8\j\0\r\5\y\1\n\f\e\p\7\3\r\f\8\g\w\h\f\9\x\k\l\5\9\l\9\8\j\g\i\p\v\8\j\z\5\t\5\c\h\9\e\u\0\1\n\t\z\j\t\i\y\5\u\y\d\j\l\1\f\n\l\n\o\e\i\5\a\7\a\a\8\9\0\b\n\c\9\0\g\0\z\e\6\b\k\1\y\z\7\i\r\c\p\6\9\9\t\p\e\4\e\0\8\5\t\6\n\h\c\q\g\7\j\w\2\m\7\h\5\2\2\e\w\8\s\c\m\z\l\h\7\3\9\2\v\k\m\a\x\l\x\h\5\g\i\w\p\l\h\e\y\k\5\c\l\j\g\c\n\e\p\f\m\k\n\j\7\q\k\8\r\d\z\o\k\3\f\a\j\y\h\1\9\f\l\k\v\j\f\k\1\t\o\6\c\n\z\6\z\j\o\9\6\5\w\l\i\n\8\0\a\4\8\7\u\h\n\p\3\p\2\3\8\p\9\m\u\6\t\a\b\2\n\j\4\7\9\s\n\f\w\6\8\7\i\1\e\0\o\f\z\i\x\i\b\5\d\u\8\n\o\m\l\0\r\o\6\4\v\c\p\x\j\9\c\f\k\p\4\q\t\f\2\7\j\a\c\o\i\n\t\f\8\e\r\8\4\3\g\8\a\3\4\b\r\w\v\q\2\k\u\1\3\h\r\0\r\2\i\p\c\g\w\g\p\8\b\5\5\0\w\c\7\t\i\9\9\r\a\y\5\k\r\w\t\7\6\e\w\n\f\u\6\7\l\d\8\7\u\x\4 ]] 00:08:22.072 06:47:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.073 06:47:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.073 [2024-12-13 06:47:26.498898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:22.073 [2024-12-13 06:47:26.499005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70289 ] 00:08:22.332 [2024-12-13 06:47:26.639143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.332 [2024-12-13 06:47:26.680397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.332  [2024-12-13T06:47:27.110Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.591 00:08:22.591 06:47:26 -- dd/posix.sh@93 -- # [[ ndxwse0zc5nxjuc8xek88dfjf2skky6h4qbqsj3fru07xths7xay56g5sm1ijsn2sbgvfskeb8na10qcrv0mvrrww2jyrxebizlx6pc22noj0irf9prt4uccd07o5fd8hxj0iq4gmks4182rh70m8j0r5y1nfep73rf8gwhf9xkl59l98jgipv8jz5t5ch9eu01ntzjtiy5uydjl1fnlnoei5a7aa890bnc90g0ze6bk1yz7ircp699tpe4e085t6nhcqg7jw2m7h522ew8scmzlh7392vkmaxlxh5giwplheyk5cljgcnepfmknj7qk8rdzok3fajyh19flkvjfk1to6cnz6zjo965wlin80a487uhnp3p238p9mu6tab2nj479snfw687i1e0ofzixib5du8noml0ro64vcpxj9cfkp4qtf27jacointf8er843g8a34brwvq2ku13hr0r2ipcgwgp8b550wc7ti99ray5krwt76ewnfu67ld87ux4 == \n\d\x\w\s\e\0\z\c\5\n\x\j\u\c\8\x\e\k\8\8\d\f\j\f\2\s\k\k\y\6\h\4\q\b\q\s\j\3\f\r\u\0\7\x\t\h\s\7\x\a\y\5\6\g\5\s\m\1\i\j\s\n\2\s\b\g\v\f\s\k\e\b\8\n\a\1\0\q\c\r\v\0\m\v\r\r\w\w\2\j\y\r\x\e\b\i\z\l\x\6\p\c\2\2\n\o\j\0\i\r\f\9\p\r\t\4\u\c\c\d\0\7\o\5\f\d\8\h\x\j\0\i\q\4\g\m\k\s\4\1\8\2\r\h\7\0\m\8\j\0\r\5\y\1\n\f\e\p\7\3\r\f\8\g\w\h\f\9\x\k\l\5\9\l\9\8\j\g\i\p\v\8\j\z\5\t\5\c\h\9\e\u\0\1\n\t\z\j\t\i\y\5\u\y\d\j\l\1\f\n\l\n\o\e\i\5\a\7\a\a\8\9\0\b\n\c\9\0\g\0\z\e\6\b\k\1\y\z\7\i\r\c\p\6\9\9\t\p\e\4\e\0\8\5\t\6\n\h\c\q\g\7\j\w\2\m\7\h\5\2\2\e\w\8\s\c\m\z\l\h\7\3\9\2\v\k\m\a\x\l\x\h\5\g\i\w\p\l\h\e\y\k\5\c\l\j\g\c\n\e\p\f\m\k\n\j\7\q\k\8\r\d\z\o\k\3\f\a\j\y\h\1\9\f\l\k\v\j\f\k\1\t\o\6\c\n\z\6\z\j\o\9\6\5\w\l\i\n\8\0\a\4\8\7\u\h\n\p\3\p\2\3\8\p\9\m\u\6\t\a\b\2\n\j\4\7\9\s\n\f\w\6\8\7\i\1\e\0\o\f\z\i\x\i\b\5\d\u\8\n\o\m\l\0\r\o\6\4\v\c\p\x\j\9\c\f\k\p\4\q\t\f\2\7\j\a\c\o\i\n\t\f\8\e\r\8\4\3\g\8\a\3\4\b\r\w\v\q\2\k\u\1\3\h\r\0\r\2\i\p\c\g\w\g\p\8\b\5\5\0\w\c\7\t\i\9\9\r\a\y\5\k\r\w\t\7\6\e\w\n\f\u\6\7\l\d\8\7\u\x\4 ]] 00:08:22.591 06:47:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.591 06:47:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:22.591 [2024-12-13 06:47:26.953924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:22.591 [2024-12-13 06:47:26.954023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70291 ] 00:08:22.591 [2024-12-13 06:47:27.091876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.849 [2024-12-13 06:47:27.132969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.849  [2024-12-13T06:47:27.368Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.849 00:08:22.849 06:47:27 -- dd/posix.sh@93 -- # [[ ndxwse0zc5nxjuc8xek88dfjf2skky6h4qbqsj3fru07xths7xay56g5sm1ijsn2sbgvfskeb8na10qcrv0mvrrww2jyrxebizlx6pc22noj0irf9prt4uccd07o5fd8hxj0iq4gmks4182rh70m8j0r5y1nfep73rf8gwhf9xkl59l98jgipv8jz5t5ch9eu01ntzjtiy5uydjl1fnlnoei5a7aa890bnc90g0ze6bk1yz7ircp699tpe4e085t6nhcqg7jw2m7h522ew8scmzlh7392vkmaxlxh5giwplheyk5cljgcnepfmknj7qk8rdzok3fajyh19flkvjfk1to6cnz6zjo965wlin80a487uhnp3p238p9mu6tab2nj479snfw687i1e0ofzixib5du8noml0ro64vcpxj9cfkp4qtf27jacointf8er843g8a34brwvq2ku13hr0r2ipcgwgp8b550wc7ti99ray5krwt76ewnfu67ld87ux4 == \n\d\x\w\s\e\0\z\c\5\n\x\j\u\c\8\x\e\k\8\8\d\f\j\f\2\s\k\k\y\6\h\4\q\b\q\s\j\3\f\r\u\0\7\x\t\h\s\7\x\a\y\5\6\g\5\s\m\1\i\j\s\n\2\s\b\g\v\f\s\k\e\b\8\n\a\1\0\q\c\r\v\0\m\v\r\r\w\w\2\j\y\r\x\e\b\i\z\l\x\6\p\c\2\2\n\o\j\0\i\r\f\9\p\r\t\4\u\c\c\d\0\7\o\5\f\d\8\h\x\j\0\i\q\4\g\m\k\s\4\1\8\2\r\h\7\0\m\8\j\0\r\5\y\1\n\f\e\p\7\3\r\f\8\g\w\h\f\9\x\k\l\5\9\l\9\8\j\g\i\p\v\8\j\z\5\t\5\c\h\9\e\u\0\1\n\t\z\j\t\i\y\5\u\y\d\j\l\1\f\n\l\n\o\e\i\5\a\7\a\a\8\9\0\b\n\c\9\0\g\0\z\e\6\b\k\1\y\z\7\i\r\c\p\6\9\9\t\p\e\4\e\0\8\5\t\6\n\h\c\q\g\7\j\w\2\m\7\h\5\2\2\e\w\8\s\c\m\z\l\h\7\3\9\2\v\k\m\a\x\l\x\h\5\g\i\w\p\l\h\e\y\k\5\c\l\j\g\c\n\e\p\f\m\k\n\j\7\q\k\8\r\d\z\o\k\3\f\a\j\y\h\1\9\f\l\k\v\j\f\k\1\t\o\6\c\n\z\6\z\j\o\9\6\5\w\l\i\n\8\0\a\4\8\7\u\h\n\p\3\p\2\3\8\p\9\m\u\6\t\a\b\2\n\j\4\7\9\s\n\f\w\6\8\7\i\1\e\0\o\f\z\i\x\i\b\5\d\u\8\n\o\m\l\0\r\o\6\4\v\c\p\x\j\9\c\f\k\p\4\q\t\f\2\7\j\a\c\o\i\n\t\f\8\e\r\8\4\3\g\8\a\3\4\b\r\w\v\q\2\k\u\1\3\h\r\0\r\2\i\p\c\g\w\g\p\8\b\5\5\0\w\c\7\t\i\9\9\r\a\y\5\k\r\w\t\7\6\e\w\n\f\u\6\7\l\d\8\7\u\x\4 ]] 00:08:22.849 06:47:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.849 06:47:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.109 [2024-12-13 06:47:27.380651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.109 [2024-12-13 06:47:27.380750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:08:23.109 [2024-12-13 06:47:27.520529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.109 [2024-12-13 06:47:27.561911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.109  [2024-12-13T06:47:27.887Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.368 00:08:23.368 06:47:27 -- dd/posix.sh@93 -- # [[ ndxwse0zc5nxjuc8xek88dfjf2skky6h4qbqsj3fru07xths7xay56g5sm1ijsn2sbgvfskeb8na10qcrv0mvrrww2jyrxebizlx6pc22noj0irf9prt4uccd07o5fd8hxj0iq4gmks4182rh70m8j0r5y1nfep73rf8gwhf9xkl59l98jgipv8jz5t5ch9eu01ntzjtiy5uydjl1fnlnoei5a7aa890bnc90g0ze6bk1yz7ircp699tpe4e085t6nhcqg7jw2m7h522ew8scmzlh7392vkmaxlxh5giwplheyk5cljgcnepfmknj7qk8rdzok3fajyh19flkvjfk1to6cnz6zjo965wlin80a487uhnp3p238p9mu6tab2nj479snfw687i1e0ofzixib5du8noml0ro64vcpxj9cfkp4qtf27jacointf8er843g8a34brwvq2ku13hr0r2ipcgwgp8b550wc7ti99ray5krwt76ewnfu67ld87ux4 == \n\d\x\w\s\e\0\z\c\5\n\x\j\u\c\8\x\e\k\8\8\d\f\j\f\2\s\k\k\y\6\h\4\q\b\q\s\j\3\f\r\u\0\7\x\t\h\s\7\x\a\y\5\6\g\5\s\m\1\i\j\s\n\2\s\b\g\v\f\s\k\e\b\8\n\a\1\0\q\c\r\v\0\m\v\r\r\w\w\2\j\y\r\x\e\b\i\z\l\x\6\p\c\2\2\n\o\j\0\i\r\f\9\p\r\t\4\u\c\c\d\0\7\o\5\f\d\8\h\x\j\0\i\q\4\g\m\k\s\4\1\8\2\r\h\7\0\m\8\j\0\r\5\y\1\n\f\e\p\7\3\r\f\8\g\w\h\f\9\x\k\l\5\9\l\9\8\j\g\i\p\v\8\j\z\5\t\5\c\h\9\e\u\0\1\n\t\z\j\t\i\y\5\u\y\d\j\l\1\f\n\l\n\o\e\i\5\a\7\a\a\8\9\0\b\n\c\9\0\g\0\z\e\6\b\k\1\y\z\7\i\r\c\p\6\9\9\t\p\e\4\e\0\8\5\t\6\n\h\c\q\g\7\j\w\2\m\7\h\5\2\2\e\w\8\s\c\m\z\l\h\7\3\9\2\v\k\m\a\x\l\x\h\5\g\i\w\p\l\h\e\y\k\5\c\l\j\g\c\n\e\p\f\m\k\n\j\7\q\k\8\r\d\z\o\k\3\f\a\j\y\h\1\9\f\l\k\v\j\f\k\1\t\o\6\c\n\z\6\z\j\o\9\6\5\w\l\i\n\8\0\a\4\8\7\u\h\n\p\3\p\2\3\8\p\9\m\u\6\t\a\b\2\n\j\4\7\9\s\n\f\w\6\8\7\i\1\e\0\o\f\z\i\x\i\b\5\d\u\8\n\o\m\l\0\r\o\6\4\v\c\p\x\j\9\c\f\k\p\4\q\t\f\2\7\j\a\c\o\i\n\t\f\8\e\r\8\4\3\g\8\a\3\4\b\r\w\v\q\2\k\u\1\3\h\r\0\r\2\i\p\c\g\w\g\p\8\b\5\5\0\w\c\7\t\i\9\9\r\a\y\5\k\r\w\t\7\6\e\w\n\f\u\6\7\l\d\8\7\u\x\4 ]] 00:08:23.368 06:47:27 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:23.368 06:47:27 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:23.368 06:47:27 -- dd/common.sh@98 -- # xtrace_disable 00:08:23.368 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.368 06:47:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.368 06:47:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:23.368 [2024-12-13 06:47:27.805523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
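Each of those copies is verified by the long [[ ... == \n\d\x... ]] lines: the left side is dump1's content and the right side is the same generated string with every character backslash-escaped, which forces [[ == ]] into a literal comparison instead of glob matching. Quoting achieves the same effect:

    expected=$(<dd.dump0)          # the 512 generated bytes
    actual=$(<dd.dump1)
    [[ $actual == "$expected" ]]   # quoted RHS is matched literally, like the escaped form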
00:08:23.368 [2024-12-13 06:47:27.805620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70306 ] 00:08:23.627 [2024-12-13 06:47:27.945870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.627 [2024-12-13 06:47:27.983442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.627  [2024-12-13T06:47:28.405Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.886 00:08:23.886 06:47:28 -- dd/posix.sh@93 -- # [[ lcs69pwemrqa5pwep4tel0be0y4t45hzvgac001otjq5wkzt9cjg6qguw3fqhuyt366khl9gho07valk9vlx883jkj848nvpz3tow3lp4s428w4ggc1ge75r59xujbp6hy1dswnweuu1pyzxyk6hr9qf3te7lyplv49b82e20nj7z3cpnkuk7ici3cs3rnc0bc9il8c7mzkdxaa1gcg8t5ukv6k8bfstsb8ffwevvmw0eges4gq2s52jjy8df2mg2ep7rer3eregg07qr5z0y0hzhq82kksd8c4zgdczswzbtu1g0a9kifio7qha4qvwctog8xgi96xc88rb3htqc4dy2muoi6zpgdi4avpi28x8068e3s7tkyyzgvdqgfylxd5vtj7guziuywct19y30ka6cb0r1w25fny18pxx6rl883fndsqw7exqa9otibwk5rp0yeyr0jd11jnwzh7jy1su4w603am8b2aoigsn0fa1s0vv0gw4lhtvepkuue2w == \l\c\s\6\9\p\w\e\m\r\q\a\5\p\w\e\p\4\t\e\l\0\b\e\0\y\4\t\4\5\h\z\v\g\a\c\0\0\1\o\t\j\q\5\w\k\z\t\9\c\j\g\6\q\g\u\w\3\f\q\h\u\y\t\3\6\6\k\h\l\9\g\h\o\0\7\v\a\l\k\9\v\l\x\8\8\3\j\k\j\8\4\8\n\v\p\z\3\t\o\w\3\l\p\4\s\4\2\8\w\4\g\g\c\1\g\e\7\5\r\5\9\x\u\j\b\p\6\h\y\1\d\s\w\n\w\e\u\u\1\p\y\z\x\y\k\6\h\r\9\q\f\3\t\e\7\l\y\p\l\v\4\9\b\8\2\e\2\0\n\j\7\z\3\c\p\n\k\u\k\7\i\c\i\3\c\s\3\r\n\c\0\b\c\9\i\l\8\c\7\m\z\k\d\x\a\a\1\g\c\g\8\t\5\u\k\v\6\k\8\b\f\s\t\s\b\8\f\f\w\e\v\v\m\w\0\e\g\e\s\4\g\q\2\s\5\2\j\j\y\8\d\f\2\m\g\2\e\p\7\r\e\r\3\e\r\e\g\g\0\7\q\r\5\z\0\y\0\h\z\h\q\8\2\k\k\s\d\8\c\4\z\g\d\c\z\s\w\z\b\t\u\1\g\0\a\9\k\i\f\i\o\7\q\h\a\4\q\v\w\c\t\o\g\8\x\g\i\9\6\x\c\8\8\r\b\3\h\t\q\c\4\d\y\2\m\u\o\i\6\z\p\g\d\i\4\a\v\p\i\2\8\x\8\0\6\8\e\3\s\7\t\k\y\y\z\g\v\d\q\g\f\y\l\x\d\5\v\t\j\7\g\u\z\i\u\y\w\c\t\1\9\y\3\0\k\a\6\c\b\0\r\1\w\2\5\f\n\y\1\8\p\x\x\6\r\l\8\8\3\f\n\d\s\q\w\7\e\x\q\a\9\o\t\i\b\w\k\5\r\p\0\y\e\y\r\0\j\d\1\1\j\n\w\z\h\7\j\y\1\s\u\4\w\6\0\3\a\m\8\b\2\a\o\i\g\s\n\0\f\a\1\s\0\v\v\0\g\w\4\l\h\t\v\e\p\k\u\u\e\2\w ]] 00:08:23.886 06:47:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.886 06:47:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:23.886 [2024-12-13 06:47:28.205780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.886 [2024-12-13 06:47:28.206032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70314 ] 00:08:23.886 [2024-12-13 06:47:28.334407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.887 [2024-12-13 06:47:28.371691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.146  [2024-12-13T06:47:28.665Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.146 00:08:24.146 06:47:28 -- dd/posix.sh@93 -- # [[ lcs69pwemrqa5pwep4tel0be0y4t45hzvgac001otjq5wkzt9cjg6qguw3fqhuyt366khl9gho07valk9vlx883jkj848nvpz3tow3lp4s428w4ggc1ge75r59xujbp6hy1dswnweuu1pyzxyk6hr9qf3te7lyplv49b82e20nj7z3cpnkuk7ici3cs3rnc0bc9il8c7mzkdxaa1gcg8t5ukv6k8bfstsb8ffwevvmw0eges4gq2s52jjy8df2mg2ep7rer3eregg07qr5z0y0hzhq82kksd8c4zgdczswzbtu1g0a9kifio7qha4qvwctog8xgi96xc88rb3htqc4dy2muoi6zpgdi4avpi28x8068e3s7tkyyzgvdqgfylxd5vtj7guziuywct19y30ka6cb0r1w25fny18pxx6rl883fndsqw7exqa9otibwk5rp0yeyr0jd11jnwzh7jy1su4w603am8b2aoigsn0fa1s0vv0gw4lhtvepkuue2w == \l\c\s\6\9\p\w\e\m\r\q\a\5\p\w\e\p\4\t\e\l\0\b\e\0\y\4\t\4\5\h\z\v\g\a\c\0\0\1\o\t\j\q\5\w\k\z\t\9\c\j\g\6\q\g\u\w\3\f\q\h\u\y\t\3\6\6\k\h\l\9\g\h\o\0\7\v\a\l\k\9\v\l\x\8\8\3\j\k\j\8\4\8\n\v\p\z\3\t\o\w\3\l\p\4\s\4\2\8\w\4\g\g\c\1\g\e\7\5\r\5\9\x\u\j\b\p\6\h\y\1\d\s\w\n\w\e\u\u\1\p\y\z\x\y\k\6\h\r\9\q\f\3\t\e\7\l\y\p\l\v\4\9\b\8\2\e\2\0\n\j\7\z\3\c\p\n\k\u\k\7\i\c\i\3\c\s\3\r\n\c\0\b\c\9\i\l\8\c\7\m\z\k\d\x\a\a\1\g\c\g\8\t\5\u\k\v\6\k\8\b\f\s\t\s\b\8\f\f\w\e\v\v\m\w\0\e\g\e\s\4\g\q\2\s\5\2\j\j\y\8\d\f\2\m\g\2\e\p\7\r\e\r\3\e\r\e\g\g\0\7\q\r\5\z\0\y\0\h\z\h\q\8\2\k\k\s\d\8\c\4\z\g\d\c\z\s\w\z\b\t\u\1\g\0\a\9\k\i\f\i\o\7\q\h\a\4\q\v\w\c\t\o\g\8\x\g\i\9\6\x\c\8\8\r\b\3\h\t\q\c\4\d\y\2\m\u\o\i\6\z\p\g\d\i\4\a\v\p\i\2\8\x\8\0\6\8\e\3\s\7\t\k\y\y\z\g\v\d\q\g\f\y\l\x\d\5\v\t\j\7\g\u\z\i\u\y\w\c\t\1\9\y\3\0\k\a\6\c\b\0\r\1\w\2\5\f\n\y\1\8\p\x\x\6\r\l\8\8\3\f\n\d\s\q\w\7\e\x\q\a\9\o\t\i\b\w\k\5\r\p\0\y\e\y\r\0\j\d\1\1\j\n\w\z\h\7\j\y\1\s\u\4\w\6\0\3\a\m\8\b\2\a\o\i\g\s\n\0\f\a\1\s\0\v\v\0\g\w\4\l\h\t\v\e\p\k\u\u\e\2\w ]] 00:08:24.146 06:47:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.146 06:47:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:24.146 [2024-12-13 06:47:28.629824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:24.146 [2024-12-13 06:47:28.630062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70321 ] 00:08:24.405 [2024-12-13 06:47:28.769255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.405 [2024-12-13 06:47:28.802457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.405  [2024-12-13T06:47:29.183Z] Copying: 512/512 [B] (average 166 kBps) 00:08:24.664 00:08:24.664 06:47:28 -- dd/posix.sh@93 -- # [[ lcs69pwemrqa5pwep4tel0be0y4t45hzvgac001otjq5wkzt9cjg6qguw3fqhuyt366khl9gho07valk9vlx883jkj848nvpz3tow3lp4s428w4ggc1ge75r59xujbp6hy1dswnweuu1pyzxyk6hr9qf3te7lyplv49b82e20nj7z3cpnkuk7ici3cs3rnc0bc9il8c7mzkdxaa1gcg8t5ukv6k8bfstsb8ffwevvmw0eges4gq2s52jjy8df2mg2ep7rer3eregg07qr5z0y0hzhq82kksd8c4zgdczswzbtu1g0a9kifio7qha4qvwctog8xgi96xc88rb3htqc4dy2muoi6zpgdi4avpi28x8068e3s7tkyyzgvdqgfylxd5vtj7guziuywct19y30ka6cb0r1w25fny18pxx6rl883fndsqw7exqa9otibwk5rp0yeyr0jd11jnwzh7jy1su4w603am8b2aoigsn0fa1s0vv0gw4lhtvepkuue2w == \l\c\s\6\9\p\w\e\m\r\q\a\5\p\w\e\p\4\t\e\l\0\b\e\0\y\4\t\4\5\h\z\v\g\a\c\0\0\1\o\t\j\q\5\w\k\z\t\9\c\j\g\6\q\g\u\w\3\f\q\h\u\y\t\3\6\6\k\h\l\9\g\h\o\0\7\v\a\l\k\9\v\l\x\8\8\3\j\k\j\8\4\8\n\v\p\z\3\t\o\w\3\l\p\4\s\4\2\8\w\4\g\g\c\1\g\e\7\5\r\5\9\x\u\j\b\p\6\h\y\1\d\s\w\n\w\e\u\u\1\p\y\z\x\y\k\6\h\r\9\q\f\3\t\e\7\l\y\p\l\v\4\9\b\8\2\e\2\0\n\j\7\z\3\c\p\n\k\u\k\7\i\c\i\3\c\s\3\r\n\c\0\b\c\9\i\l\8\c\7\m\z\k\d\x\a\a\1\g\c\g\8\t\5\u\k\v\6\k\8\b\f\s\t\s\b\8\f\f\w\e\v\v\m\w\0\e\g\e\s\4\g\q\2\s\5\2\j\j\y\8\d\f\2\m\g\2\e\p\7\r\e\r\3\e\r\e\g\g\0\7\q\r\5\z\0\y\0\h\z\h\q\8\2\k\k\s\d\8\c\4\z\g\d\c\z\s\w\z\b\t\u\1\g\0\a\9\k\i\f\i\o\7\q\h\a\4\q\v\w\c\t\o\g\8\x\g\i\9\6\x\c\8\8\r\b\3\h\t\q\c\4\d\y\2\m\u\o\i\6\z\p\g\d\i\4\a\v\p\i\2\8\x\8\0\6\8\e\3\s\7\t\k\y\y\z\g\v\d\q\g\f\y\l\x\d\5\v\t\j\7\g\u\z\i\u\y\w\c\t\1\9\y\3\0\k\a\6\c\b\0\r\1\w\2\5\f\n\y\1\8\p\x\x\6\r\l\8\8\3\f\n\d\s\q\w\7\e\x\q\a\9\o\t\i\b\w\k\5\r\p\0\y\e\y\r\0\j\d\1\1\j\n\w\z\h\7\j\y\1\s\u\4\w\6\0\3\a\m\8\b\2\a\o\i\g\s\n\0\f\a\1\s\0\v\v\0\g\w\4\l\h\t\v\e\p\k\u\u\e\2\w ]] 00:08:24.664 06:47:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.664 06:47:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:24.664 [2024-12-13 06:47:29.031915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:24.664 [2024-12-13 06:47:29.032024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:08:24.664 [2024-12-13 06:47:29.169865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.923 [2024-12-13 06:47:29.205758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.923  [2024-12-13T06:47:29.442Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.923 00:08:24.923 ************************************ 00:08:24.923 END TEST dd_flags_misc 00:08:24.923 ************************************ 00:08:24.923 06:47:29 -- dd/posix.sh@93 -- # [[ lcs69pwemrqa5pwep4tel0be0y4t45hzvgac001otjq5wkzt9cjg6qguw3fqhuyt366khl9gho07valk9vlx883jkj848nvpz3tow3lp4s428w4ggc1ge75r59xujbp6hy1dswnweuu1pyzxyk6hr9qf3te7lyplv49b82e20nj7z3cpnkuk7ici3cs3rnc0bc9il8c7mzkdxaa1gcg8t5ukv6k8bfstsb8ffwevvmw0eges4gq2s52jjy8df2mg2ep7rer3eregg07qr5z0y0hzhq82kksd8c4zgdczswzbtu1g0a9kifio7qha4qvwctog8xgi96xc88rb3htqc4dy2muoi6zpgdi4avpi28x8068e3s7tkyyzgvdqgfylxd5vtj7guziuywct19y30ka6cb0r1w25fny18pxx6rl883fndsqw7exqa9otibwk5rp0yeyr0jd11jnwzh7jy1su4w603am8b2aoigsn0fa1s0vv0gw4lhtvepkuue2w == \l\c\s\6\9\p\w\e\m\r\q\a\5\p\w\e\p\4\t\e\l\0\b\e\0\y\4\t\4\5\h\z\v\g\a\c\0\0\1\o\t\j\q\5\w\k\z\t\9\c\j\g\6\q\g\u\w\3\f\q\h\u\y\t\3\6\6\k\h\l\9\g\h\o\0\7\v\a\l\k\9\v\l\x\8\8\3\j\k\j\8\4\8\n\v\p\z\3\t\o\w\3\l\p\4\s\4\2\8\w\4\g\g\c\1\g\e\7\5\r\5\9\x\u\j\b\p\6\h\y\1\d\s\w\n\w\e\u\u\1\p\y\z\x\y\k\6\h\r\9\q\f\3\t\e\7\l\y\p\l\v\4\9\b\8\2\e\2\0\n\j\7\z\3\c\p\n\k\u\k\7\i\c\i\3\c\s\3\r\n\c\0\b\c\9\i\l\8\c\7\m\z\k\d\x\a\a\1\g\c\g\8\t\5\u\k\v\6\k\8\b\f\s\t\s\b\8\f\f\w\e\v\v\m\w\0\e\g\e\s\4\g\q\2\s\5\2\j\j\y\8\d\f\2\m\g\2\e\p\7\r\e\r\3\e\r\e\g\g\0\7\q\r\5\z\0\y\0\h\z\h\q\8\2\k\k\s\d\8\c\4\z\g\d\c\z\s\w\z\b\t\u\1\g\0\a\9\k\i\f\i\o\7\q\h\a\4\q\v\w\c\t\o\g\8\x\g\i\9\6\x\c\8\8\r\b\3\h\t\q\c\4\d\y\2\m\u\o\i\6\z\p\g\d\i\4\a\v\p\i\2\8\x\8\0\6\8\e\3\s\7\t\k\y\y\z\g\v\d\q\g\f\y\l\x\d\5\v\t\j\7\g\u\z\i\u\y\w\c\t\1\9\y\3\0\k\a\6\c\b\0\r\1\w\2\5\f\n\y\1\8\p\x\x\6\r\l\8\8\3\f\n\d\s\q\w\7\e\x\q\a\9\o\t\i\b\w\k\5\r\p\0\y\e\y\r\0\j\d\1\1\j\n\w\z\h\7\j\y\1\s\u\4\w\6\0\3\a\m\8\b\2\a\o\i\g\s\n\0\f\a\1\s\0\v\v\0\g\w\4\l\h\t\v\e\p\k\u\u\e\2\w ]] 00:08:24.923 00:08:24.923 real 0m3.396s 00:08:24.923 user 0m1.659s 00:08:24.923 sys 0m0.747s 00:08:24.923 06:47:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.923 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:24.923 06:47:29 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:24.923 06:47:29 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:24.923 * Second test run, disabling liburing, forcing AIO 00:08:24.923 06:47:29 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:24.923 06:47:29 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:24.923 06:47:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.923 06:47:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.923 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:24.923 ************************************ 00:08:24.923 START TEST dd_flag_append_forced_aio 00:08:24.923 ************************************ 00:08:24.923 06:47:29 -- common/autotest_common.sh@1114 -- # append 00:08:24.923 06:47:29 -- dd/posix.sh@16 -- # local dump0 00:08:24.923 06:47:29 -- dd/posix.sh@17 -- # local dump1 00:08:24.924 06:47:29 -- dd/posix.sh@19 -- # gen_bytes 32 
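From this point the posix suite repeats with liburing disabled: dd/posix.sh@113 appends --aio to the DD_APP array, so every spdk_dd call below carries --aio right after the binary and exercises the POSIX AIO path instead of io_uring. The pattern, assuming DD_APP starts out as just the binary path:

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)  # assumed initial value
    DD_APP+=("--aio")                                        # force AIO for the second pass
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append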
00:08:24.924 06:47:29 -- dd/common.sh@98 -- # xtrace_disable 00:08:24.924 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 06:47:29 -- dd/posix.sh@19 -- # dump0=gsslq9c216b21nl3over5p16bkomxvvc 00:08:25.182 06:47:29 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:25.182 06:47:29 -- dd/common.sh@98 -- # xtrace_disable 00:08:25.182 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 06:47:29 -- dd/posix.sh@20 -- # dump1=7ugshnvpf3d9m33xyviuqk70wpnnus8y 00:08:25.182 06:47:29 -- dd/posix.sh@22 -- # printf %s gsslq9c216b21nl3over5p16bkomxvvc 00:08:25.182 06:47:29 -- dd/posix.sh@23 -- # printf %s 7ugshnvpf3d9m33xyviuqk70wpnnus8y 00:08:25.182 06:47:29 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:25.182 [2024-12-13 06:47:29.492162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.182 [2024-12-13 06:47:29.492737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70350 ] 00:08:25.182 [2024-12-13 06:47:29.628636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.182 [2024-12-13 06:47:29.663654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.440  [2024-12-13T06:47:29.959Z] Copying: 32/32 [B] (average 31 kBps) 00:08:25.440 00:08:25.440 06:47:29 -- dd/posix.sh@27 -- # [[ 7ugshnvpf3d9m33xyviuqk70wpnnus8ygsslq9c216b21nl3over5p16bkomxvvc == \7\u\g\s\h\n\v\p\f\3\d\9\m\3\3\x\y\v\i\u\q\k\7\0\w\p\n\n\u\s\8\y\g\s\s\l\q\9\c\2\1\6\b\2\1\n\l\3\o\v\e\r\5\p\1\6\b\k\o\m\x\v\v\c ]] 00:08:25.440 00:08:25.440 real 0m0.398s 00:08:25.440 user 0m0.178s 00:08:25.440 sys 0m0.103s 00:08:25.440 06:47:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.440 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.440 ************************************ 00:08:25.440 END TEST dd_flag_append_forced_aio 00:08:25.440 ************************************ 00:08:25.440 06:47:29 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:25.440 06:47:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.440 06:47:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.440 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.440 ************************************ 00:08:25.440 START TEST dd_flag_directory_forced_aio 00:08:25.440 ************************************ 00:08:25.440 06:47:29 -- common/autotest_common.sh@1114 -- # directory 00:08:25.440 06:47:29 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.440 06:47:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:25.440 06:47:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.440 06:47:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.440 06:47:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.440 06:47:29 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.440 06:47:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.440 06:47:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.440 06:47:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.440 06:47:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.440 06:47:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.440 06:47:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.440 [2024-12-13 06:47:29.939606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.440 [2024-12-13 06:47:29.939701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70376 ] 00:08:25.699 [2024-12-13 06:47:30.077959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.699 [2024-12-13 06:47:30.110141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.699 [2024-12-13 06:47:30.152816] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:25.699 [2024-12-13 06:47:30.152870] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:25.699 [2024-12-13 06:47:30.152898] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.699 [2024-12-13 06:47:30.209009] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:25.958 06:47:30 -- common/autotest_common.sh@653 -- # es=236 00:08:25.958 06:47:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.958 06:47:30 -- common/autotest_common.sh@662 -- # es=108 00:08:25.958 06:47:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.958 06:47:30 -- common/autotest_common.sh@670 -- # es=1 00:08:25.958 06:47:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.958 06:47:30 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:25.958 06:47:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:25.958 06:47:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:25.958 06:47:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.958 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.958 06:47:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.958 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.958 06:47:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.958 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.958 06:47:30 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.958 06:47:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.958 06:47:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:25.958 [2024-12-13 06:47:30.321295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.958 [2024-12-13 06:47:30.321408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70386 ] 00:08:25.958 [2024-12-13 06:47:30.459950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.217 [2024-12-13 06:47:30.489676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.217 [2024-12-13 06:47:30.528501] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:26.217 [2024-12-13 06:47:30.528552] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:26.217 [2024-12-13 06:47:30.528580] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.217 [2024-12-13 06:47:30.589967] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.217 06:47:30 -- common/autotest_common.sh@653 -- # es=236 00:08:26.217 06:47:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.217 06:47:30 -- common/autotest_common.sh@662 -- # es=108 00:08:26.217 06:47:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.217 06:47:30 -- common/autotest_common.sh@670 -- # es=1 00:08:26.217 06:47:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.217 00:08:26.217 real 0m0.762s 00:08:26.217 user 0m0.378s 00:08:26.217 sys 0m0.176s 00:08:26.217 06:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.217 06:47:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.217 ************************************ 00:08:26.217 END TEST dd_flag_directory_forced_aio 00:08:26.217 ************************************ 00:08:26.217 06:47:30 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:26.217 06:47:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.217 06:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.217 06:47:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.217 ************************************ 00:08:26.217 START TEST dd_flag_nofollow_forced_aio 00:08:26.217 ************************************ 00:08:26.217 06:47:30 -- common/autotest_common.sh@1114 -- # nofollow 00:08:26.217 06:47:30 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:26.217 06:47:30 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:26.217 06:47:30 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:26.217 06:47:30 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:26.217 06:47:30 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.218 06:47:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:26.218 06:47:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.218 06:47:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.218 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.218 06:47:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.218 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.218 06:47:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.218 06:47:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.218 06:47:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.218 06:47:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.218 06:47:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.477 [2024-12-13 06:47:30.754781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.477 [2024-12-13 06:47:30.754871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ] 00:08:26.477 [2024-12-13 06:47:30.892817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.477 [2024-12-13 06:47:30.922435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.477 [2024-12-13 06:47:30.963204] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:26.477 [2024-12-13 06:47:30.963259] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:26.477 [2024-12-13 06:47:30.963286] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.736 [2024-12-13 06:47:31.019504] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.736 06:47:31 -- common/autotest_common.sh@653 -- # es=216 00:08:26.736 06:47:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.736 06:47:31 -- common/autotest_common.sh@662 -- # es=88 00:08:26.736 06:47:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.736 06:47:31 -- common/autotest_common.sh@670 -- # es=1 00:08:26.736 06:47:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.736 06:47:31 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:26.736 06:47:31 -- common/autotest_common.sh@650 -- # local es=0 00:08:26.736 06:47:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:26.736 06:47:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.736 06:47:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.736 06:47:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.736 06:47:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.736 06:47:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.736 06:47:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.736 06:47:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.736 06:47:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.736 06:47:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:26.736 [2024-12-13 06:47:31.130726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.736 [2024-12-13 06:47:31.130819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:08:26.995 [2024-12-13 06:47:31.268855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.995 [2024-12-13 06:47:31.298989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.995 [2024-12-13 06:47:31.337585] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:26.995 [2024-12-13 06:47:31.337636] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:26.995 [2024-12-13 06:47:31.337665] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.995 [2024-12-13 06:47:31.390410] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:26.995 06:47:31 -- common/autotest_common.sh@653 -- # es=216 00:08:26.995 06:47:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.995 06:47:31 -- common/autotest_common.sh@662 -- # es=88 00:08:26.995 06:47:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.995 06:47:31 -- common/autotest_common.sh@670 -- # es=1 00:08:26.995 06:47:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.995 06:47:31 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:26.995 06:47:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:26.995 06:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:26.995 06:47:31 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.995 [2024-12-13 06:47:31.510725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
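The two failing runs above follow the same expected-error shape as the directory test: --iflag=nofollow and --oflag=nofollow map to O_NOFOLLOW in dd-style flag handling, so opening either symlink fails with ELOOP ("Too many levels of symbolic links"), and the NOT wrapper only requires a non-zero exit status. A condensed sketch of the pattern, assuming spdk_dd is on PATH and using the relative file names from the trace:

    ln -fs dd.dump0 dd.dump0.link
    # negative case: nofollow on a symlink must fail with ELOOP
    if spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
        echo 'expected nofollow to fail' >&2; exit 1
    fi
    # positive case: without nofollow the link is dereferenced and the copy succeeds
    spdk_dd --aio --if=dd.dump0.link --of=dd.dump1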
00:08:26.995 [2024-12-13 06:47:31.511002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70426 ] 00:08:27.254 [2024-12-13 06:47:31.645163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.254 [2024-12-13 06:47:31.678937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.254  [2024-12-13T06:47:32.032Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.513 00:08:27.513 06:47:31 -- dd/posix.sh@49 -- # [[ 2h7fpzuyt3b10rgdmaxo7llvt97hqtc0pwr8yodylur27msitzmki4iqk8apzdtdnugv9s3zglmabm7vcq13g7ckktvk5185dieg3rw4k3ojnpchkqoexqtkvtwqmj7fkneycpflk15w84d0qblkgdctmwwlp9utznvv941lokmrqirpkhsf4f75vmffar4le7vt15v46tjhqpxsxhpheu7m8anckkeepm2z6maeto643hydum24b23s9ahlau0eg5nhx360bvjjjfbeuqbg58le4p049msnffwsx4p8eo1lxvfx718jays7y7lv2ofviuu1gfeb8wuo67wzefo2fg92htarseb6fcsxtg9w1w5vkspez1nlk8vhnmm5fk6hl7yazozw8vg2f0vcw2kaxtp0g1txvpz0kpx8x29jppv7lk9dnztbe72103y6gags849bbdxy7i9ruyhgbdvyvrctdkagiujy73r2uck32pu22hzetiuj9b7801ldyhly == \2\h\7\f\p\z\u\y\t\3\b\1\0\r\g\d\m\a\x\o\7\l\l\v\t\9\7\h\q\t\c\0\p\w\r\8\y\o\d\y\l\u\r\2\7\m\s\i\t\z\m\k\i\4\i\q\k\8\a\p\z\d\t\d\n\u\g\v\9\s\3\z\g\l\m\a\b\m\7\v\c\q\1\3\g\7\c\k\k\t\v\k\5\1\8\5\d\i\e\g\3\r\w\4\k\3\o\j\n\p\c\h\k\q\o\e\x\q\t\k\v\t\w\q\m\j\7\f\k\n\e\y\c\p\f\l\k\1\5\w\8\4\d\0\q\b\l\k\g\d\c\t\m\w\w\l\p\9\u\t\z\n\v\v\9\4\1\l\o\k\m\r\q\i\r\p\k\h\s\f\4\f\7\5\v\m\f\f\a\r\4\l\e\7\v\t\1\5\v\4\6\t\j\h\q\p\x\s\x\h\p\h\e\u\7\m\8\a\n\c\k\k\e\e\p\m\2\z\6\m\a\e\t\o\6\4\3\h\y\d\u\m\2\4\b\2\3\s\9\a\h\l\a\u\0\e\g\5\n\h\x\3\6\0\b\v\j\j\j\f\b\e\u\q\b\g\5\8\l\e\4\p\0\4\9\m\s\n\f\f\w\s\x\4\p\8\e\o\1\l\x\v\f\x\7\1\8\j\a\y\s\7\y\7\l\v\2\o\f\v\i\u\u\1\g\f\e\b\8\w\u\o\6\7\w\z\e\f\o\2\f\g\9\2\h\t\a\r\s\e\b\6\f\c\s\x\t\g\9\w\1\w\5\v\k\s\p\e\z\1\n\l\k\8\v\h\n\m\m\5\f\k\6\h\l\7\y\a\z\o\z\w\8\v\g\2\f\0\v\c\w\2\k\a\x\t\p\0\g\1\t\x\v\p\z\0\k\p\x\8\x\2\9\j\p\p\v\7\l\k\9\d\n\z\t\b\e\7\2\1\0\3\y\6\g\a\g\s\8\4\9\b\b\d\x\y\7\i\9\r\u\y\h\g\b\d\v\y\v\r\c\t\d\k\a\g\i\u\j\y\7\3\r\2\u\c\k\3\2\p\u\2\2\h\z\e\t\i\u\j\9\b\7\8\0\1\l\d\y\h\l\y ]] 00:08:27.513 00:08:27.513 real 0m1.154s 00:08:27.513 user 0m0.567s 00:08:27.513 sys 0m0.258s 00:08:27.513 06:47:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.513 06:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.513 ************************************ 00:08:27.513 END TEST dd_flag_nofollow_forced_aio 00:08:27.513 ************************************ 00:08:27.513 06:47:31 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:27.513 06:47:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:27.513 06:47:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.513 06:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.513 ************************************ 00:08:27.513 START TEST dd_flag_noatime_forced_aio 00:08:27.513 ************************************ 00:08:27.513 06:47:31 -- common/autotest_common.sh@1114 -- # noatime 00:08:27.513 06:47:31 -- dd/posix.sh@53 -- # local atime_if 00:08:27.513 06:47:31 -- dd/posix.sh@54 -- # local atime_of 00:08:27.513 06:47:31 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:27.513 06:47:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:27.513 06:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.513 06:47:31 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.513 06:47:31 -- dd/posix.sh@60 -- 
# atime_if=1734072451 00:08:27.513 06:47:31 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.513 06:47:31 -- dd/posix.sh@61 -- # atime_of=1734072451 00:08:27.513 06:47:31 -- dd/posix.sh@66 -- # sleep 1 00:08:28.470 06:47:32 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.470 [2024-12-13 06:47:32.972986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:28.470 [2024-12-13 06:47:32.973403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70466 ] 00:08:28.745 [2024-12-13 06:47:33.119935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.745 [2024-12-13 06:47:33.159136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.745  [2024-12-13T06:47:33.523Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.004 00:08:29.004 06:47:33 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.004 06:47:33 -- dd/posix.sh@69 -- # (( atime_if == 1734072451 )) 00:08:29.004 06:47:33 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.004 06:47:33 -- dd/posix.sh@70 -- # (( atime_of == 1734072451 )) 00:08:29.004 06:47:33 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.004 [2024-12-13 06:47:33.406963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
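The noatime flow above captures the input file's access time with stat --printf=%X (1734072451 in this run), copies with --iflag=noatime, and asserts the timestamp did not move; a later copy without the flag is then expected to advance it. Condensed, assuming the same relative file names:

    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1    # guarantee a later read would be visible in atime
    spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_if == $(stat --printf=%X dd.dump0) ))   # unchanged under noatime
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1
    (( atime_if <  $(stat --printf=%X dd.dump0) ))   # advances without the flag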
00:08:29.004 [2024-12-13 06:47:33.407054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70478 ] 00:08:29.263 [2024-12-13 06:47:33.542921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.263 [2024-12-13 06:47:33.572716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.263  [2024-12-13T06:47:33.782Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.263 00:08:29.263 06:47:33 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.263 ************************************ 00:08:29.263 END TEST dd_flag_noatime_forced_aio 00:08:29.263 ************************************ 00:08:29.263 06:47:33 -- dd/posix.sh@73 -- # (( atime_if < 1734072453 )) 00:08:29.263 00:08:29.263 real 0m1.850s 00:08:29.263 user 0m0.405s 00:08:29.263 sys 0m0.204s 00:08:29.263 06:47:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.263 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 06:47:33 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:29.522 06:47:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.522 06:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.522 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 ************************************ 00:08:29.522 START TEST dd_flags_misc_forced_aio 00:08:29.522 ************************************ 00:08:29.522 06:47:33 -- common/autotest_common.sh@1114 -- # io 00:08:29.522 06:47:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:29.522 06:47:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:29.522 06:47:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:29.522 06:47:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:29.522 06:47:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:29.522 06:47:33 -- dd/common.sh@98 -- # xtrace_disable 00:08:29.522 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 06:47:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.522 06:47:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:29.522 [2024-12-13 06:47:33.865942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
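dd_flags_misc_forced_aio then runs one copy per (read flag, write flag) pair and compares the output against the generated 512-byte payload, which is why the same pattern repeats eight times below. The loop reconstructed from the flag arrays in the trace:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                    --of=dd.dump1 --oflag="$flag_rw"
            # each pass re-checks that dd.dump1 carries the generated payload
        done
    done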
00:08:29.522 [2024-12-13 06:47:33.866036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70504 ] 00:08:29.522 [2024-12-13 06:47:34.004551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.522 [2024-12-13 06:47:34.033933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.782  [2024-12-13T06:47:34.301Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.782 00:08:29.782 06:47:34 -- dd/posix.sh@93 -- # [[ l84cz4gtrar4ub9yhw65scnvelp44r3o88t55490lzb1gbntbxspzw7jhm14irebgxokf439siti3l1i2g6qafy6hoaiulte05r0g8oisj489ia41eh7ntemzs91cuqidr8hln1rgy1wqx8wk68b5mw4334r7z6mfxa4s231db44ikffnxxpf28cy9jers15105pnrlr42c46tabuxuqd6lz731o3c0i50rql83c5c8uehj4u4dc87q8gyxzkfi7sq1bfmd4y2rvw3tjc266q98q9zvxeug144ig3fgl09judlfivau6hdsqebb6i8olkd9v70ayqquju3ohjy6qfx014jvvd7wqgdzs8863wntjnnjc5vjwa7zksnw9obj721w1rtjyp3r2k5kv68bq94iyz688i3ly7elz1bdn6xcekad6sqyg9rbufu6fz9e1taktxnpl1wcbesrvk3u4t6u2vu9wgi5be0s2eh29lpr6j2x0qe4da5qbg3qnigcf == \l\8\4\c\z\4\g\t\r\a\r\4\u\b\9\y\h\w\6\5\s\c\n\v\e\l\p\4\4\r\3\o\8\8\t\5\5\4\9\0\l\z\b\1\g\b\n\t\b\x\s\p\z\w\7\j\h\m\1\4\i\r\e\b\g\x\o\k\f\4\3\9\s\i\t\i\3\l\1\i\2\g\6\q\a\f\y\6\h\o\a\i\u\l\t\e\0\5\r\0\g\8\o\i\s\j\4\8\9\i\a\4\1\e\h\7\n\t\e\m\z\s\9\1\c\u\q\i\d\r\8\h\l\n\1\r\g\y\1\w\q\x\8\w\k\6\8\b\5\m\w\4\3\3\4\r\7\z\6\m\f\x\a\4\s\2\3\1\d\b\4\4\i\k\f\f\n\x\x\p\f\2\8\c\y\9\j\e\r\s\1\5\1\0\5\p\n\r\l\r\4\2\c\4\6\t\a\b\u\x\u\q\d\6\l\z\7\3\1\o\3\c\0\i\5\0\r\q\l\8\3\c\5\c\8\u\e\h\j\4\u\4\d\c\8\7\q\8\g\y\x\z\k\f\i\7\s\q\1\b\f\m\d\4\y\2\r\v\w\3\t\j\c\2\6\6\q\9\8\q\9\z\v\x\e\u\g\1\4\4\i\g\3\f\g\l\0\9\j\u\d\l\f\i\v\a\u\6\h\d\s\q\e\b\b\6\i\8\o\l\k\d\9\v\7\0\a\y\q\q\u\j\u\3\o\h\j\y\6\q\f\x\0\1\4\j\v\v\d\7\w\q\g\d\z\s\8\8\6\3\w\n\t\j\n\n\j\c\5\v\j\w\a\7\z\k\s\n\w\9\o\b\j\7\2\1\w\1\r\t\j\y\p\3\r\2\k\5\k\v\6\8\b\q\9\4\i\y\z\6\8\8\i\3\l\y\7\e\l\z\1\b\d\n\6\x\c\e\k\a\d\6\s\q\y\g\9\r\b\u\f\u\6\f\z\9\e\1\t\a\k\t\x\n\p\l\1\w\c\b\e\s\r\v\k\3\u\4\t\6\u\2\v\u\9\w\g\i\5\b\e\0\s\2\e\h\2\9\l\p\r\6\j\2\x\0\q\e\4\d\a\5\q\b\g\3\q\n\i\g\c\f ]] 00:08:29.782 06:47:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.782 06:47:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:29.782 [2024-12-13 06:47:34.271208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:29.782 [2024-12-13 06:47:34.271800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:08:30.041 [2024-12-13 06:47:34.408776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.041 [2024-12-13 06:47:34.438259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.041  [2024-12-13T06:47:34.819Z] Copying: 512/512 [B] (average 500 kBps) 00:08:30.300 00:08:30.300 06:47:34 -- dd/posix.sh@93 -- # [[ l84cz4gtrar4ub9yhw65scnvelp44r3o88t55490lzb1gbntbxspzw7jhm14irebgxokf439siti3l1i2g6qafy6hoaiulte05r0g8oisj489ia41eh7ntemzs91cuqidr8hln1rgy1wqx8wk68b5mw4334r7z6mfxa4s231db44ikffnxxpf28cy9jers15105pnrlr42c46tabuxuqd6lz731o3c0i50rql83c5c8uehj4u4dc87q8gyxzkfi7sq1bfmd4y2rvw3tjc266q98q9zvxeug144ig3fgl09judlfivau6hdsqebb6i8olkd9v70ayqquju3ohjy6qfx014jvvd7wqgdzs8863wntjnnjc5vjwa7zksnw9obj721w1rtjyp3r2k5kv68bq94iyz688i3ly7elz1bdn6xcekad6sqyg9rbufu6fz9e1taktxnpl1wcbesrvk3u4t6u2vu9wgi5be0s2eh29lpr6j2x0qe4da5qbg3qnigcf == \l\8\4\c\z\4\g\t\r\a\r\4\u\b\9\y\h\w\6\5\s\c\n\v\e\l\p\4\4\r\3\o\8\8\t\5\5\4\9\0\l\z\b\1\g\b\n\t\b\x\s\p\z\w\7\j\h\m\1\4\i\r\e\b\g\x\o\k\f\4\3\9\s\i\t\i\3\l\1\i\2\g\6\q\a\f\y\6\h\o\a\i\u\l\t\e\0\5\r\0\g\8\o\i\s\j\4\8\9\i\a\4\1\e\h\7\n\t\e\m\z\s\9\1\c\u\q\i\d\r\8\h\l\n\1\r\g\y\1\w\q\x\8\w\k\6\8\b\5\m\w\4\3\3\4\r\7\z\6\m\f\x\a\4\s\2\3\1\d\b\4\4\i\k\f\f\n\x\x\p\f\2\8\c\y\9\j\e\r\s\1\5\1\0\5\p\n\r\l\r\4\2\c\4\6\t\a\b\u\x\u\q\d\6\l\z\7\3\1\o\3\c\0\i\5\0\r\q\l\8\3\c\5\c\8\u\e\h\j\4\u\4\d\c\8\7\q\8\g\y\x\z\k\f\i\7\s\q\1\b\f\m\d\4\y\2\r\v\w\3\t\j\c\2\6\6\q\9\8\q\9\z\v\x\e\u\g\1\4\4\i\g\3\f\g\l\0\9\j\u\d\l\f\i\v\a\u\6\h\d\s\q\e\b\b\6\i\8\o\l\k\d\9\v\7\0\a\y\q\q\u\j\u\3\o\h\j\y\6\q\f\x\0\1\4\j\v\v\d\7\w\q\g\d\z\s\8\8\6\3\w\n\t\j\n\n\j\c\5\v\j\w\a\7\z\k\s\n\w\9\o\b\j\7\2\1\w\1\r\t\j\y\p\3\r\2\k\5\k\v\6\8\b\q\9\4\i\y\z\6\8\8\i\3\l\y\7\e\l\z\1\b\d\n\6\x\c\e\k\a\d\6\s\q\y\g\9\r\b\u\f\u\6\f\z\9\e\1\t\a\k\t\x\n\p\l\1\w\c\b\e\s\r\v\k\3\u\4\t\6\u\2\v\u\9\w\g\i\5\b\e\0\s\2\e\h\2\9\l\p\r\6\j\2\x\0\q\e\4\d\a\5\q\b\g\3\q\n\i\g\c\f ]] 00:08:30.300 06:47:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.300 06:47:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:30.300 [2024-12-13 06:47:34.669395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:30.300 [2024-12-13 06:47:34.669497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70514 ] 00:08:30.300 [2024-12-13 06:47:34.810591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.560 [2024-12-13 06:47:34.845090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.560  [2024-12-13T06:47:35.079Z] Copying: 512/512 [B] (average 166 kBps) 00:08:30.560 00:08:30.560 06:47:35 -- dd/posix.sh@93 -- # [[ l84cz4gtrar4ub9yhw65scnvelp44r3o88t55490lzb1gbntbxspzw7jhm14irebgxokf439siti3l1i2g6qafy6hoaiulte05r0g8oisj489ia41eh7ntemzs91cuqidr8hln1rgy1wqx8wk68b5mw4334r7z6mfxa4s231db44ikffnxxpf28cy9jers15105pnrlr42c46tabuxuqd6lz731o3c0i50rql83c5c8uehj4u4dc87q8gyxzkfi7sq1bfmd4y2rvw3tjc266q98q9zvxeug144ig3fgl09judlfivau6hdsqebb6i8olkd9v70ayqquju3ohjy6qfx014jvvd7wqgdzs8863wntjnnjc5vjwa7zksnw9obj721w1rtjyp3r2k5kv68bq94iyz688i3ly7elz1bdn6xcekad6sqyg9rbufu6fz9e1taktxnpl1wcbesrvk3u4t6u2vu9wgi5be0s2eh29lpr6j2x0qe4da5qbg3qnigcf == \l\8\4\c\z\4\g\t\r\a\r\4\u\b\9\y\h\w\6\5\s\c\n\v\e\l\p\4\4\r\3\o\8\8\t\5\5\4\9\0\l\z\b\1\g\b\n\t\b\x\s\p\z\w\7\j\h\m\1\4\i\r\e\b\g\x\o\k\f\4\3\9\s\i\t\i\3\l\1\i\2\g\6\q\a\f\y\6\h\o\a\i\u\l\t\e\0\5\r\0\g\8\o\i\s\j\4\8\9\i\a\4\1\e\h\7\n\t\e\m\z\s\9\1\c\u\q\i\d\r\8\h\l\n\1\r\g\y\1\w\q\x\8\w\k\6\8\b\5\m\w\4\3\3\4\r\7\z\6\m\f\x\a\4\s\2\3\1\d\b\4\4\i\k\f\f\n\x\x\p\f\2\8\c\y\9\j\e\r\s\1\5\1\0\5\p\n\r\l\r\4\2\c\4\6\t\a\b\u\x\u\q\d\6\l\z\7\3\1\o\3\c\0\i\5\0\r\q\l\8\3\c\5\c\8\u\e\h\j\4\u\4\d\c\8\7\q\8\g\y\x\z\k\f\i\7\s\q\1\b\f\m\d\4\y\2\r\v\w\3\t\j\c\2\6\6\q\9\8\q\9\z\v\x\e\u\g\1\4\4\i\g\3\f\g\l\0\9\j\u\d\l\f\i\v\a\u\6\h\d\s\q\e\b\b\6\i\8\o\l\k\d\9\v\7\0\a\y\q\q\u\j\u\3\o\h\j\y\6\q\f\x\0\1\4\j\v\v\d\7\w\q\g\d\z\s\8\8\6\3\w\n\t\j\n\n\j\c\5\v\j\w\a\7\z\k\s\n\w\9\o\b\j\7\2\1\w\1\r\t\j\y\p\3\r\2\k\5\k\v\6\8\b\q\9\4\i\y\z\6\8\8\i\3\l\y\7\e\l\z\1\b\d\n\6\x\c\e\k\a\d\6\s\q\y\g\9\r\b\u\f\u\6\f\z\9\e\1\t\a\k\t\x\n\p\l\1\w\c\b\e\s\r\v\k\3\u\4\t\6\u\2\v\u\9\w\g\i\5\b\e\0\s\2\e\h\2\9\l\p\r\6\j\2\x\0\q\e\4\d\a\5\q\b\g\3\q\n\i\g\c\f ]] 00:08:30.560 06:47:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.560 06:47:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:30.560 [2024-12-13 06:47:35.061507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:30.560 [2024-12-13 06:47:35.061606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70521 ] 00:08:30.818 [2024-12-13 06:47:35.198193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.818 [2024-12-13 06:47:35.229748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.818  [2024-12-13T06:47:35.596Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.077 00:08:31.077 06:47:35 -- dd/posix.sh@93 -- # [[ l84cz4gtrar4ub9yhw65scnvelp44r3o88t55490lzb1gbntbxspzw7jhm14irebgxokf439siti3l1i2g6qafy6hoaiulte05r0g8oisj489ia41eh7ntemzs91cuqidr8hln1rgy1wqx8wk68b5mw4334r7z6mfxa4s231db44ikffnxxpf28cy9jers15105pnrlr42c46tabuxuqd6lz731o3c0i50rql83c5c8uehj4u4dc87q8gyxzkfi7sq1bfmd4y2rvw3tjc266q98q9zvxeug144ig3fgl09judlfivau6hdsqebb6i8olkd9v70ayqquju3ohjy6qfx014jvvd7wqgdzs8863wntjnnjc5vjwa7zksnw9obj721w1rtjyp3r2k5kv68bq94iyz688i3ly7elz1bdn6xcekad6sqyg9rbufu6fz9e1taktxnpl1wcbesrvk3u4t6u2vu9wgi5be0s2eh29lpr6j2x0qe4da5qbg3qnigcf == \l\8\4\c\z\4\g\t\r\a\r\4\u\b\9\y\h\w\6\5\s\c\n\v\e\l\p\4\4\r\3\o\8\8\t\5\5\4\9\0\l\z\b\1\g\b\n\t\b\x\s\p\z\w\7\j\h\m\1\4\i\r\e\b\g\x\o\k\f\4\3\9\s\i\t\i\3\l\1\i\2\g\6\q\a\f\y\6\h\o\a\i\u\l\t\e\0\5\r\0\g\8\o\i\s\j\4\8\9\i\a\4\1\e\h\7\n\t\e\m\z\s\9\1\c\u\q\i\d\r\8\h\l\n\1\r\g\y\1\w\q\x\8\w\k\6\8\b\5\m\w\4\3\3\4\r\7\z\6\m\f\x\a\4\s\2\3\1\d\b\4\4\i\k\f\f\n\x\x\p\f\2\8\c\y\9\j\e\r\s\1\5\1\0\5\p\n\r\l\r\4\2\c\4\6\t\a\b\u\x\u\q\d\6\l\z\7\3\1\o\3\c\0\i\5\0\r\q\l\8\3\c\5\c\8\u\e\h\j\4\u\4\d\c\8\7\q\8\g\y\x\z\k\f\i\7\s\q\1\b\f\m\d\4\y\2\r\v\w\3\t\j\c\2\6\6\q\9\8\q\9\z\v\x\e\u\g\1\4\4\i\g\3\f\g\l\0\9\j\u\d\l\f\i\v\a\u\6\h\d\s\q\e\b\b\6\i\8\o\l\k\d\9\v\7\0\a\y\q\q\u\j\u\3\o\h\j\y\6\q\f\x\0\1\4\j\v\v\d\7\w\q\g\d\z\s\8\8\6\3\w\n\t\j\n\n\j\c\5\v\j\w\a\7\z\k\s\n\w\9\o\b\j\7\2\1\w\1\r\t\j\y\p\3\r\2\k\5\k\v\6\8\b\q\9\4\i\y\z\6\8\8\i\3\l\y\7\e\l\z\1\b\d\n\6\x\c\e\k\a\d\6\s\q\y\g\9\r\b\u\f\u\6\f\z\9\e\1\t\a\k\t\x\n\p\l\1\w\c\b\e\s\r\v\k\3\u\4\t\6\u\2\v\u\9\w\g\i\5\b\e\0\s\2\e\h\2\9\l\p\r\6\j\2\x\0\q\e\4\d\a\5\q\b\g\3\q\n\i\g\c\f ]] 00:08:31.077 06:47:35 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:31.077 06:47:35 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:31.077 06:47:35 -- dd/common.sh@98 -- # xtrace_disable 00:08:31.077 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:08:31.077 06:47:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.077 06:47:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:31.077 [2024-12-13 06:47:35.465300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:31.077 [2024-12-13 06:47:35.465412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70529 ] 00:08:31.340 [2024-12-13 06:47:35.601497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.340 [2024-12-13 06:47:35.630802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.340  [2024-12-13T06:47:35.859Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.340 00:08:31.340 06:47:35 -- dd/posix.sh@93 -- # [[ q1nzcu1iucdqjp6solsl14hyylh6ax5a8yh9d6kev5gkfew27454xyariv740ctsn1wnyorhl6mk16aa86hc94ikpi2g940lxv6h2hlwtjgh0offac1g9tuqnvmlziwdwpio57zgr3uhlshat6gkocwa1osswp8xvr2ttllgtjjv2ulf16v6w17ipgo8ijywnl0ol1givmqzr17az0kyk5v48qaisafpdhodtmisubcetej17whxhfv8s6a78edkmn7f7p3080op8b8r6phmd68lxt3dgam6pin6v1jxq31f98s6q8mwxpqyoyxhhcj8nuonc2fer0cj9p2ow6hbmrx2z86hsh0vvp95q1l3cgwgaynt6heas8urwui0mv1lj31xzukht3t2plw23k587f9op4q9cuji8cdw93erv47jlf28bocyn4amnm65i8gc9sfeiz3gwcer7uynmduzj9r3jx34yxafkqog12gvbbe6d9tihk8zpsagk4c37k0k == \q\1\n\z\c\u\1\i\u\c\d\q\j\p\6\s\o\l\s\l\1\4\h\y\y\l\h\6\a\x\5\a\8\y\h\9\d\6\k\e\v\5\g\k\f\e\w\2\7\4\5\4\x\y\a\r\i\v\7\4\0\c\t\s\n\1\w\n\y\o\r\h\l\6\m\k\1\6\a\a\8\6\h\c\9\4\i\k\p\i\2\g\9\4\0\l\x\v\6\h\2\h\l\w\t\j\g\h\0\o\f\f\a\c\1\g\9\t\u\q\n\v\m\l\z\i\w\d\w\p\i\o\5\7\z\g\r\3\u\h\l\s\h\a\t\6\g\k\o\c\w\a\1\o\s\s\w\p\8\x\v\r\2\t\t\l\l\g\t\j\j\v\2\u\l\f\1\6\v\6\w\1\7\i\p\g\o\8\i\j\y\w\n\l\0\o\l\1\g\i\v\m\q\z\r\1\7\a\z\0\k\y\k\5\v\4\8\q\a\i\s\a\f\p\d\h\o\d\t\m\i\s\u\b\c\e\t\e\j\1\7\w\h\x\h\f\v\8\s\6\a\7\8\e\d\k\m\n\7\f\7\p\3\0\8\0\o\p\8\b\8\r\6\p\h\m\d\6\8\l\x\t\3\d\g\a\m\6\p\i\n\6\v\1\j\x\q\3\1\f\9\8\s\6\q\8\m\w\x\p\q\y\o\y\x\h\h\c\j\8\n\u\o\n\c\2\f\e\r\0\c\j\9\p\2\o\w\6\h\b\m\r\x\2\z\8\6\h\s\h\0\v\v\p\9\5\q\1\l\3\c\g\w\g\a\y\n\t\6\h\e\a\s\8\u\r\w\u\i\0\m\v\1\l\j\3\1\x\z\u\k\h\t\3\t\2\p\l\w\2\3\k\5\8\7\f\9\o\p\4\q\9\c\u\j\i\8\c\d\w\9\3\e\r\v\4\7\j\l\f\2\8\b\o\c\y\n\4\a\m\n\m\6\5\i\8\g\c\9\s\f\e\i\z\3\g\w\c\e\r\7\u\y\n\m\d\u\z\j\9\r\3\j\x\3\4\y\x\a\f\k\q\o\g\1\2\g\v\b\b\e\6\d\9\t\i\h\k\8\z\p\s\a\g\k\4\c\3\7\k\0\k ]] 00:08:31.340 06:47:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.340 06:47:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:31.600 [2024-12-13 06:47:35.861865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:31.600 [2024-12-13 06:47:35.861964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70531 ] 00:08:31.600 [2024-12-13 06:47:36.000942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.600 [2024-12-13 06:47:36.030155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.600  [2024-12-13T06:47:36.378Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.859 00:08:31.859 06:47:36 -- dd/posix.sh@93 -- # [[ q1nzcu1iucdqjp6solsl14hyylh6ax5a8yh9d6kev5gkfew27454xyariv740ctsn1wnyorhl6mk16aa86hc94ikpi2g940lxv6h2hlwtjgh0offac1g9tuqnvmlziwdwpio57zgr3uhlshat6gkocwa1osswp8xvr2ttllgtjjv2ulf16v6w17ipgo8ijywnl0ol1givmqzr17az0kyk5v48qaisafpdhodtmisubcetej17whxhfv8s6a78edkmn7f7p3080op8b8r6phmd68lxt3dgam6pin6v1jxq31f98s6q8mwxpqyoyxhhcj8nuonc2fer0cj9p2ow6hbmrx2z86hsh0vvp95q1l3cgwgaynt6heas8urwui0mv1lj31xzukht3t2plw23k587f9op4q9cuji8cdw93erv47jlf28bocyn4amnm65i8gc9sfeiz3gwcer7uynmduzj9r3jx34yxafkqog12gvbbe6d9tihk8zpsagk4c37k0k == \q\1\n\z\c\u\1\i\u\c\d\q\j\p\6\s\o\l\s\l\1\4\h\y\y\l\h\6\a\x\5\a\8\y\h\9\d\6\k\e\v\5\g\k\f\e\w\2\7\4\5\4\x\y\a\r\i\v\7\4\0\c\t\s\n\1\w\n\y\o\r\h\l\6\m\k\1\6\a\a\8\6\h\c\9\4\i\k\p\i\2\g\9\4\0\l\x\v\6\h\2\h\l\w\t\j\g\h\0\o\f\f\a\c\1\g\9\t\u\q\n\v\m\l\z\i\w\d\w\p\i\o\5\7\z\g\r\3\u\h\l\s\h\a\t\6\g\k\o\c\w\a\1\o\s\s\w\p\8\x\v\r\2\t\t\l\l\g\t\j\j\v\2\u\l\f\1\6\v\6\w\1\7\i\p\g\o\8\i\j\y\w\n\l\0\o\l\1\g\i\v\m\q\z\r\1\7\a\z\0\k\y\k\5\v\4\8\q\a\i\s\a\f\p\d\h\o\d\t\m\i\s\u\b\c\e\t\e\j\1\7\w\h\x\h\f\v\8\s\6\a\7\8\e\d\k\m\n\7\f\7\p\3\0\8\0\o\p\8\b\8\r\6\p\h\m\d\6\8\l\x\t\3\d\g\a\m\6\p\i\n\6\v\1\j\x\q\3\1\f\9\8\s\6\q\8\m\w\x\p\q\y\o\y\x\h\h\c\j\8\n\u\o\n\c\2\f\e\r\0\c\j\9\p\2\o\w\6\h\b\m\r\x\2\z\8\6\h\s\h\0\v\v\p\9\5\q\1\l\3\c\g\w\g\a\y\n\t\6\h\e\a\s\8\u\r\w\u\i\0\m\v\1\l\j\3\1\x\z\u\k\h\t\3\t\2\p\l\w\2\3\k\5\8\7\f\9\o\p\4\q\9\c\u\j\i\8\c\d\w\9\3\e\r\v\4\7\j\l\f\2\8\b\o\c\y\n\4\a\m\n\m\6\5\i\8\g\c\9\s\f\e\i\z\3\g\w\c\e\r\7\u\y\n\m\d\u\z\j\9\r\3\j\x\3\4\y\x\a\f\k\q\o\g\1\2\g\v\b\b\e\6\d\9\t\i\h\k\8\z\p\s\a\g\k\4\c\3\7\k\0\k ]] 00:08:31.859 06:47:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.859 06:47:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:31.859 [2024-12-13 06:47:36.266138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:31.859 [2024-12-13 06:47:36.266239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70538 ] 00:08:32.118 [2024-12-13 06:47:36.404787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.118 [2024-12-13 06:47:36.434023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.118  [2024-12-13T06:47:36.637Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.118 00:08:32.118 06:47:36 -- dd/posix.sh@93 -- # [[ q1nzcu1iucdqjp6solsl14hyylh6ax5a8yh9d6kev5gkfew27454xyariv740ctsn1wnyorhl6mk16aa86hc94ikpi2g940lxv6h2hlwtjgh0offac1g9tuqnvmlziwdwpio57zgr3uhlshat6gkocwa1osswp8xvr2ttllgtjjv2ulf16v6w17ipgo8ijywnl0ol1givmqzr17az0kyk5v48qaisafpdhodtmisubcetej17whxhfv8s6a78edkmn7f7p3080op8b8r6phmd68lxt3dgam6pin6v1jxq31f98s6q8mwxpqyoyxhhcj8nuonc2fer0cj9p2ow6hbmrx2z86hsh0vvp95q1l3cgwgaynt6heas8urwui0mv1lj31xzukht3t2plw23k587f9op4q9cuji8cdw93erv47jlf28bocyn4amnm65i8gc9sfeiz3gwcer7uynmduzj9r3jx34yxafkqog12gvbbe6d9tihk8zpsagk4c37k0k == \q\1\n\z\c\u\1\i\u\c\d\q\j\p\6\s\o\l\s\l\1\4\h\y\y\l\h\6\a\x\5\a\8\y\h\9\d\6\k\e\v\5\g\k\f\e\w\2\7\4\5\4\x\y\a\r\i\v\7\4\0\c\t\s\n\1\w\n\y\o\r\h\l\6\m\k\1\6\a\a\8\6\h\c\9\4\i\k\p\i\2\g\9\4\0\l\x\v\6\h\2\h\l\w\t\j\g\h\0\o\f\f\a\c\1\g\9\t\u\q\n\v\m\l\z\i\w\d\w\p\i\o\5\7\z\g\r\3\u\h\l\s\h\a\t\6\g\k\o\c\w\a\1\o\s\s\w\p\8\x\v\r\2\t\t\l\l\g\t\j\j\v\2\u\l\f\1\6\v\6\w\1\7\i\p\g\o\8\i\j\y\w\n\l\0\o\l\1\g\i\v\m\q\z\r\1\7\a\z\0\k\y\k\5\v\4\8\q\a\i\s\a\f\p\d\h\o\d\t\m\i\s\u\b\c\e\t\e\j\1\7\w\h\x\h\f\v\8\s\6\a\7\8\e\d\k\m\n\7\f\7\p\3\0\8\0\o\p\8\b\8\r\6\p\h\m\d\6\8\l\x\t\3\d\g\a\m\6\p\i\n\6\v\1\j\x\q\3\1\f\9\8\s\6\q\8\m\w\x\p\q\y\o\y\x\h\h\c\j\8\n\u\o\n\c\2\f\e\r\0\c\j\9\p\2\o\w\6\h\b\m\r\x\2\z\8\6\h\s\h\0\v\v\p\9\5\q\1\l\3\c\g\w\g\a\y\n\t\6\h\e\a\s\8\u\r\w\u\i\0\m\v\1\l\j\3\1\x\z\u\k\h\t\3\t\2\p\l\w\2\3\k\5\8\7\f\9\o\p\4\q\9\c\u\j\i\8\c\d\w\9\3\e\r\v\4\7\j\l\f\2\8\b\o\c\y\n\4\a\m\n\m\6\5\i\8\g\c\9\s\f\e\i\z\3\g\w\c\e\r\7\u\y\n\m\d\u\z\j\9\r\3\j\x\3\4\y\x\a\f\k\q\o\g\1\2\g\v\b\b\e\6\d\9\t\i\h\k\8\z\p\s\a\g\k\4\c\3\7\k\0\k ]] 00:08:32.118 06:47:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.118 06:47:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:32.377 [2024-12-13 06:47:36.661393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:32.377 [2024-12-13 06:47:36.661489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:08:32.377 [2024-12-13 06:47:36.799537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.377 [2024-12-13 06:47:36.829960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.377  [2024-12-13T06:47:37.155Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.636 00:08:32.636 06:47:37 -- dd/posix.sh@93 -- # [[ q1nzcu1iucdqjp6solsl14hyylh6ax5a8yh9d6kev5gkfew27454xyariv740ctsn1wnyorhl6mk16aa86hc94ikpi2g940lxv6h2hlwtjgh0offac1g9tuqnvmlziwdwpio57zgr3uhlshat6gkocwa1osswp8xvr2ttllgtjjv2ulf16v6w17ipgo8ijywnl0ol1givmqzr17az0kyk5v48qaisafpdhodtmisubcetej17whxhfv8s6a78edkmn7f7p3080op8b8r6phmd68lxt3dgam6pin6v1jxq31f98s6q8mwxpqyoyxhhcj8nuonc2fer0cj9p2ow6hbmrx2z86hsh0vvp95q1l3cgwgaynt6heas8urwui0mv1lj31xzukht3t2plw23k587f9op4q9cuji8cdw93erv47jlf28bocyn4amnm65i8gc9sfeiz3gwcer7uynmduzj9r3jx34yxafkqog12gvbbe6d9tihk8zpsagk4c37k0k == \q\1\n\z\c\u\1\i\u\c\d\q\j\p\6\s\o\l\s\l\1\4\h\y\y\l\h\6\a\x\5\a\8\y\h\9\d\6\k\e\v\5\g\k\f\e\w\2\7\4\5\4\x\y\a\r\i\v\7\4\0\c\t\s\n\1\w\n\y\o\r\h\l\6\m\k\1\6\a\a\8\6\h\c\9\4\i\k\p\i\2\g\9\4\0\l\x\v\6\h\2\h\l\w\t\j\g\h\0\o\f\f\a\c\1\g\9\t\u\q\n\v\m\l\z\i\w\d\w\p\i\o\5\7\z\g\r\3\u\h\l\s\h\a\t\6\g\k\o\c\w\a\1\o\s\s\w\p\8\x\v\r\2\t\t\l\l\g\t\j\j\v\2\u\l\f\1\6\v\6\w\1\7\i\p\g\o\8\i\j\y\w\n\l\0\o\l\1\g\i\v\m\q\z\r\1\7\a\z\0\k\y\k\5\v\4\8\q\a\i\s\a\f\p\d\h\o\d\t\m\i\s\u\b\c\e\t\e\j\1\7\w\h\x\h\f\v\8\s\6\a\7\8\e\d\k\m\n\7\f\7\p\3\0\8\0\o\p\8\b\8\r\6\p\h\m\d\6\8\l\x\t\3\d\g\a\m\6\p\i\n\6\v\1\j\x\q\3\1\f\9\8\s\6\q\8\m\w\x\p\q\y\o\y\x\h\h\c\j\8\n\u\o\n\c\2\f\e\r\0\c\j\9\p\2\o\w\6\h\b\m\r\x\2\z\8\6\h\s\h\0\v\v\p\9\5\q\1\l\3\c\g\w\g\a\y\n\t\6\h\e\a\s\8\u\r\w\u\i\0\m\v\1\l\j\3\1\x\z\u\k\h\t\3\t\2\p\l\w\2\3\k\5\8\7\f\9\o\p\4\q\9\c\u\j\i\8\c\d\w\9\3\e\r\v\4\7\j\l\f\2\8\b\o\c\y\n\4\a\m\n\m\6\5\i\8\g\c\9\s\f\e\i\z\3\g\w\c\e\r\7\u\y\n\m\d\u\z\j\9\r\3\j\x\3\4\y\x\a\f\k\q\o\g\1\2\g\v\b\b\e\6\d\9\t\i\h\k\8\z\p\s\a\g\k\4\c\3\7\k\0\k ]] 00:08:32.636 00:08:32.636 real 0m3.230s 00:08:32.636 user 0m1.548s 00:08:32.636 sys 0m0.698s 00:08:32.636 06:47:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.636 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.636 ************************************ 00:08:32.636 END TEST dd_flags_misc_forced_aio 00:08:32.636 ************************************ 00:08:32.636 06:47:37 -- dd/posix.sh@1 -- # cleanup 00:08:32.636 06:47:37 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:32.636 06:47:37 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:32.636 00:08:32.636 real 0m15.867s 00:08:32.636 user 0m6.650s 00:08:32.636 sys 0m3.405s 00:08:32.636 06:47:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.636 ************************************ 00:08:32.636 END TEST spdk_dd_posix 00:08:32.636 ************************************ 00:08:32.636 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.636 06:47:37 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:32.636 06:47:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.636 06:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:32.636 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.636 ************************************ 00:08:32.636 START TEST spdk_dd_malloc 00:08:32.636 ************************************ 00:08:32.636 06:47:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:32.896 * Looking for test storage... 00:08:32.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.896 06:47:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.896 06:47:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:32.896 06:47:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.896 06:47:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:32.896 06:47:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:32.896 06:47:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:32.896 06:47:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:32.896 06:47:37 -- scripts/common.sh@335 -- # IFS=.-: 00:08:32.896 06:47:37 -- scripts/common.sh@335 -- # read -ra ver1 00:08:32.896 06:47:37 -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.896 06:47:37 -- scripts/common.sh@336 -- # read -ra ver2 00:08:32.896 06:47:37 -- scripts/common.sh@337 -- # local 'op=<' 00:08:32.896 06:47:37 -- scripts/common.sh@339 -- # ver1_l=2 00:08:32.896 06:47:37 -- scripts/common.sh@340 -- # ver2_l=1 00:08:32.896 06:47:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:32.896 06:47:37 -- scripts/common.sh@343 -- # case "$op" in 00:08:32.896 06:47:37 -- scripts/common.sh@344 -- # : 1 00:08:32.896 06:47:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:32.896 06:47:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.896 06:47:37 -- scripts/common.sh@364 -- # decimal 1 00:08:32.896 06:47:37 -- scripts/common.sh@352 -- # local d=1 00:08:32.896 06:47:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.896 06:47:37 -- scripts/common.sh@354 -- # echo 1 00:08:32.896 06:47:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:32.896 06:47:37 -- scripts/common.sh@365 -- # decimal 2 00:08:32.896 06:47:37 -- scripts/common.sh@352 -- # local d=2 00:08:32.896 06:47:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.896 06:47:37 -- scripts/common.sh@354 -- # echo 2 00:08:32.896 06:47:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:32.896 06:47:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:32.896 06:47:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:32.896 06:47:37 -- scripts/common.sh@367 -- # return 0 00:08:32.896 06:47:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.896 06:47:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:32.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.896 --rc genhtml_branch_coverage=1 00:08:32.896 --rc genhtml_function_coverage=1 00:08:32.896 --rc genhtml_legend=1 00:08:32.896 --rc geninfo_all_blocks=1 00:08:32.896 --rc geninfo_unexecuted_blocks=1 00:08:32.896 00:08:32.896 ' 00:08:32.896 06:47:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:32.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.896 --rc genhtml_branch_coverage=1 00:08:32.896 --rc genhtml_function_coverage=1 00:08:32.896 --rc genhtml_legend=1 00:08:32.896 --rc geninfo_all_blocks=1 00:08:32.896 --rc geninfo_unexecuted_blocks=1 00:08:32.896 00:08:32.896 ' 00:08:32.896 06:47:37 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:32.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.896 --rc genhtml_branch_coverage=1 00:08:32.896 --rc genhtml_function_coverage=1 00:08:32.896 --rc genhtml_legend=1 00:08:32.896 --rc geninfo_all_blocks=1 00:08:32.896 --rc geninfo_unexecuted_blocks=1 00:08:32.896 00:08:32.896 ' 00:08:32.896 06:47:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:32.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.896 --rc genhtml_branch_coverage=1 00:08:32.896 --rc genhtml_function_coverage=1 00:08:32.896 --rc genhtml_legend=1 00:08:32.896 --rc geninfo_all_blocks=1 00:08:32.896 --rc geninfo_unexecuted_blocks=1 00:08:32.896 00:08:32.896 ' 00:08:32.896 06:47:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.896 06:47:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.896 06:47:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.896 06:47:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.896 06:47:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.897 06:47:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.897 06:47:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.897 06:47:37 -- paths/export.sh@5 -- # export PATH 00:08:32.897 06:47:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.897 06:47:37 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:32.897 06:47:37 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.897 06:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.897 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.897 ************************************ 00:08:32.897 START TEST dd_malloc_copy 00:08:32.897 ************************************ 00:08:32.897 06:47:37 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:32.897 06:47:37 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:32.897 06:47:37 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:32.897 06:47:37 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:32.897 06:47:37 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:32.897 06:47:37 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:32.897 06:47:37 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:32.897 06:47:37 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:32.897 06:47:37 -- dd/malloc.sh@28 -- # gen_conf 00:08:32.897 06:47:37 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.897 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.897 [2024-12-13 06:47:37.369914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.897 [2024-12-13 06:47:37.370534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70627 ] 00:08:32.897 { 00:08:32.897 "subsystems": [ 00:08:32.897 { 00:08:32.897 "subsystem": "bdev", 00:08:32.897 "config": [ 00:08:32.897 { 00:08:32.897 "params": { 00:08:32.897 "block_size": 512, 00:08:32.897 "num_blocks": 1048576, 00:08:32.897 "name": "malloc0" 00:08:32.897 }, 00:08:32.897 "method": "bdev_malloc_create" 00:08:32.897 }, 00:08:32.897 { 00:08:32.897 "params": { 00:08:32.897 "block_size": 512, 00:08:32.897 "num_blocks": 1048576, 00:08:32.897 "name": "malloc1" 00:08:32.897 }, 00:08:32.897 "method": "bdev_malloc_create" 00:08:32.897 }, 00:08:32.897 { 00:08:32.897 "method": "bdev_wait_for_examine" 00:08:32.897 } 00:08:32.897 ] 00:08:32.897 } 00:08:32.897 ] 00:08:32.897 } 00:08:33.156 [2024-12-13 06:47:37.510848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.156 [2024-12-13 06:47:37.541976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.535  [2024-12-13T06:47:39.991Z] Copying: 242/512 [MB] (242 MBps) [2024-12-13T06:47:39.991Z] Copying: 483/512 [MB] (241 MBps) [2024-12-13T06:47:40.249Z] Copying: 512/512 [MB] (average 241 MBps) 00:08:35.730 00:08:35.730 06:47:40 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:35.730 06:47:40 -- dd/malloc.sh@33 -- # gen_conf 00:08:35.730 06:47:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:35.730 06:47:40 -- common/autotest_common.sh@10 -- # set +x 00:08:35.730 [2024-12-13 06:47:40.218555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
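Both directions of the malloc copy drive spdk_dd with bdev targets rather than files: --ib/--ob name bdevs created from a JSON config handed over on an anonymous fd (the --json /dev/fd/62 above is bash process substitution). A standalone equivalent using the exact parameters logged, where each malloc bdev is 512-byte blocks x 1048576 blocks = 512 MiB:

    spdk_dd --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc0", "block_size": 512, "num_blocks": 1048576}},
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc1", "block_size": 512, "num_blocks": 1048576}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    )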
00:08:35.730 [2024-12-13 06:47:40.218650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70663 ] 00:08:35.730 { 00:08:35.730 "subsystems": [ 00:08:35.730 { 00:08:35.730 "subsystem": "bdev", 00:08:35.730 "config": [ 00:08:35.730 { 00:08:35.730 "params": { 00:08:35.730 "block_size": 512, 00:08:35.730 "num_blocks": 1048576, 00:08:35.730 "name": "malloc0" 00:08:35.730 }, 00:08:35.730 "method": "bdev_malloc_create" 00:08:35.730 }, 00:08:35.730 { 00:08:35.730 "params": { 00:08:35.730 "block_size": 512, 00:08:35.730 "num_blocks": 1048576, 00:08:35.730 "name": "malloc1" 00:08:35.730 }, 00:08:35.730 "method": "bdev_malloc_create" 00:08:35.730 }, 00:08:35.730 { 00:08:35.730 "method": "bdev_wait_for_examine" 00:08:35.730 } 00:08:35.730 ] 00:08:35.730 } 00:08:35.730 ] 00:08:35.730 } 00:08:35.989 [2024-12-13 06:47:40.355458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.989 [2024-12-13 06:47:40.387897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.366  [2024-12-13T06:47:42.822Z] Copying: 243/512 [MB] (243 MBps) [2024-12-13T06:47:42.822Z] Copying: 484/512 [MB] (240 MBps) [2024-12-13T06:47:43.082Z] Copying: 512/512 [MB] (average 242 MBps) 00:08:38.563 00:08:38.563 00:08:38.563 real 0m5.668s 00:08:38.563 user 0m5.073s 00:08:38.563 sys 0m0.449s 00:08:38.563 06:47:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.563 ************************************ 00:08:38.563 END TEST dd_malloc_copy 00:08:38.563 ************************************ 00:08:38.563 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:08:38.563 00:08:38.563 real 0m5.904s 00:08:38.563 user 0m5.203s 00:08:38.563 sys 0m0.553s 00:08:38.563 06:47:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.563 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.563 ************************************ 00:08:38.563 END TEST spdk_dd_malloc 00:08:38.563 ************************************ 00:08:38.563 06:47:43 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:38.563 06:47:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:38.563 06:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.563 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.822 ************************************ 00:08:38.822 START TEST spdk_dd_bdev_to_bdev 00:08:38.822 ************************************ 00:08:38.822 06:47:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:38.822 * Looking for test storage... 
00:08:38.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:38.822 06:47:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.822 06:47:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.822 06:47:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.822 06:47:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.822 06:47:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.822 06:47:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.822 06:47:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.822 06:47:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.822 06:47:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.822 06:47:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.822 06:47:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.822 06:47:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.822 06:47:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.822 06:47:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.822 06:47:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.822 06:47:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.822 06:47:43 -- scripts/common.sh@344 -- # : 1 00:08:38.822 06:47:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.822 06:47:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.822 06:47:43 -- scripts/common.sh@364 -- # decimal 1 00:08:38.822 06:47:43 -- scripts/common.sh@352 -- # local d=1 00:08:38.822 06:47:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.822 06:47:43 -- scripts/common.sh@354 -- # echo 1 00:08:38.822 06:47:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.822 06:47:43 -- scripts/common.sh@365 -- # decimal 2 00:08:38.822 06:47:43 -- scripts/common.sh@352 -- # local d=2 00:08:38.822 06:47:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.822 06:47:43 -- scripts/common.sh@354 -- # echo 2 00:08:38.822 06:47:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.822 06:47:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.822 06:47:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.822 06:47:43 -- scripts/common.sh@367 -- # return 0 00:08:38.822 06:47:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.822 06:47:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.822 --rc genhtml_branch_coverage=1 00:08:38.822 --rc genhtml_function_coverage=1 00:08:38.822 --rc genhtml_legend=1 00:08:38.822 --rc geninfo_all_blocks=1 00:08:38.822 --rc geninfo_unexecuted_blocks=1 00:08:38.822 00:08:38.822 ' 00:08:38.822 06:47:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.822 --rc genhtml_branch_coverage=1 00:08:38.822 --rc genhtml_function_coverage=1 00:08:38.822 --rc genhtml_legend=1 00:08:38.822 --rc geninfo_all_blocks=1 00:08:38.822 --rc geninfo_unexecuted_blocks=1 00:08:38.822 00:08:38.822 ' 00:08:38.822 06:47:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.822 --rc genhtml_branch_coverage=1 00:08:38.822 --rc genhtml_function_coverage=1 00:08:38.822 --rc genhtml_legend=1 00:08:38.822 --rc geninfo_all_blocks=1 00:08:38.822 --rc geninfo_unexecuted_blocks=1 00:08:38.822 00:08:38.822 ' 00:08:38.822 06:47:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.822 --rc genhtml_branch_coverage=1 00:08:38.822 --rc genhtml_function_coverage=1 00:08:38.822 --rc genhtml_legend=1 00:08:38.822 --rc geninfo_all_blocks=1 00:08:38.822 --rc geninfo_unexecuted_blocks=1 00:08:38.822 00:08:38.822 ' 00:08:38.822 06:47:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.822 06:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.822 06:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.822 06:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.822 06:47:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.822 06:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.823 06:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.823 06:47:43 -- paths/export.sh@5 -- # export PATH 00:08:38.823 06:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:38.823 06:47:43 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:38.823 06:47:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:38.823 06:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.823 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.823 ************************************ 00:08:38.823 START TEST dd_inflate_file 00:08:38.823 ************************************ 00:08:38.823 06:47:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:39.082 [2024-12-13 06:47:43.347041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
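dd_inflate_file grows the magic-seeded file by appending 64 one-MiB blocks of zeros; --oflag=append opens the output O_APPEND so the magic line stays at the front. A sketch, assuming the echo in the trace is redirected into dd.dump0 (xtrace does not show redirections):

    echo 'This Is Our Magic, find it' > dd.dump0     # 27 bytes incl. newline
    spdk_dd --if=/dev/zero --of=dd.dump0 \
            --oflag=append --bs=1048576 --count=64   # append 64 MiB of zeros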
00:08:39.082 [2024-12-13 06:47:43.347141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70769 ] 00:08:39.082 [2024-12-13 06:47:43.477225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.082 [2024-12-13 06:47:43.506869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.082  [2024-12-13T06:47:43.860Z] Copying: 64/64 [MB] (average 2064 MBps) 00:08:39.341 00:08:39.341 00:08:39.341 real 0m0.420s 00:08:39.341 user 0m0.186s 00:08:39.341 sys 0m0.114s 00:08:39.341 06:47:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.341 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:39.341 ************************************ 00:08:39.341 END TEST dd_inflate_file 00:08:39.341 ************************************ 00:08:39.341 06:47:43 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:39.341 06:47:43 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:39.341 06:47:43 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:39.341 06:47:43 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:39.341 06:47:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:39.341 06:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.341 06:47:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.341 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:39.341 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:08:39.341 ************************************ 00:08:39.341 START TEST dd_copy_to_out_bdev 00:08:39.341 ************************************ 00:08:39.341 06:47:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:39.341 [2024-12-13 06:47:43.828120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
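The wc -c above confirms the append arithmetic: 64 MiB of zeros plus the 27-byte magic line.

    echo $(( 64 * 1024 * 1024 + 27 ))   # 67108891, the test_file0_size recorded above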
00:08:39.341 [2024-12-13 06:47:43.828211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ] 00:08:39.341 { 00:08:39.341 "subsystems": [ 00:08:39.341 { 00:08:39.341 "subsystem": "bdev", 00:08:39.341 "config": [ 00:08:39.341 { 00:08:39.341 "params": { 00:08:39.341 "trtype": "pcie", 00:08:39.341 "traddr": "0000:00:06.0", 00:08:39.341 "name": "Nvme0" 00:08:39.341 }, 00:08:39.341 "method": "bdev_nvme_attach_controller" 00:08:39.341 }, 00:08:39.341 { 00:08:39.341 "params": { 00:08:39.341 "trtype": "pcie", 00:08:39.341 "traddr": "0000:00:07.0", 00:08:39.341 "name": "Nvme1" 00:08:39.341 }, 00:08:39.341 "method": "bdev_nvme_attach_controller" 00:08:39.341 }, 00:08:39.341 { 00:08:39.341 "method": "bdev_wait_for_examine" 00:08:39.341 } 00:08:39.341 ] 00:08:39.341 } 00:08:39.341 ] 00:08:39.341 } 00:08:39.600 [2024-12-13 06:47:43.965228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.600 [2024-12-13 06:47:43.995104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.977  [2024-12-13T06:47:45.755Z] Copying: 46/64 [MB] (46 MBps) [2024-12-13T06:47:45.755Z] Copying: 64/64 [MB] (average 46 MBps) 00:08:41.236 00:08:41.236 00:08:41.236 real 0m1.899s 00:08:41.236 user 0m1.678s 00:08:41.236 sys 0m0.152s 00:08:41.236 06:47:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.236 06:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.236 ************************************ 00:08:41.236 END TEST dd_copy_to_out_bdev 00:08:41.236 ************************************ 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:41.236 06:47:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:41.236 06:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.236 06:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.236 ************************************ 00:08:41.236 START TEST dd_offset_magic 00:08:41.236 ************************************ 00:08:41.236 06:47:45 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:41.236 06:47:45 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:41.236 06:47:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.236 06:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.495 [2024-12-13 06:47:45.783196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
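Note: dd_offset_magic exercises offset addressing in both directions: for each offset in (16, 64) it first copies 65 MiB from Nvme0n1 into Nvme1n1 with --seek=<offset>, so the magic at the head of the source lands <offset> MiB into the destination (--seek, like dd's, counts in --bs units, here 1 MiB). The write half, sketched with $CONF holding the two-controller JSON shown above:

    for offset in 16 64; do
      "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" \
                 --bs=1048576 --json "$CONF"
    done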
00:08:41.495 [2024-12-13 06:47:45.783287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:08:41.495 { 00:08:41.495 "subsystems": [ 00:08:41.495 { 00:08:41.495 "subsystem": "bdev", 00:08:41.495 "config": [ 00:08:41.495 { 00:08:41.495 "params": { 00:08:41.495 "trtype": "pcie", 00:08:41.495 "traddr": "0000:00:06.0", 00:08:41.495 "name": "Nvme0" 00:08:41.495 }, 00:08:41.495 "method": "bdev_nvme_attach_controller" 00:08:41.495 }, 00:08:41.495 { 00:08:41.495 "params": { 00:08:41.495 "trtype": "pcie", 00:08:41.495 "traddr": "0000:00:07.0", 00:08:41.495 "name": "Nvme1" 00:08:41.495 }, 00:08:41.495 "method": "bdev_nvme_attach_controller" 00:08:41.495 }, 00:08:41.495 { 00:08:41.495 "method": "bdev_wait_for_examine" 00:08:41.495 } 00:08:41.495 ] 00:08:41.495 } 00:08:41.495 ] 00:08:41.495 } 00:08:41.495 [2024-12-13 06:47:45.919245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.495 [2024-12-13 06:47:45.948348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.754  [2024-12-13T06:47:46.532Z] Copying: 65/65 [MB] (average 802 MBps) 00:08:42.013 00:08:42.013 06:47:46 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:42.013 06:47:46 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:42.013 06:47:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:42.013 06:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:42.013 [2024-12-13 06:47:46.395289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:42.013 [2024-12-13 06:47:46.395433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:08:42.013 { 00:08:42.013 "subsystems": [ 00:08:42.013 { 00:08:42.013 "subsystem": "bdev", 00:08:42.013 "config": [ 00:08:42.013 { 00:08:42.013 "params": { 00:08:42.013 "trtype": "pcie", 00:08:42.013 "traddr": "0000:00:06.0", 00:08:42.013 "name": "Nvme0" 00:08:42.013 }, 00:08:42.013 "method": "bdev_nvme_attach_controller" 00:08:42.013 }, 00:08:42.013 { 00:08:42.013 "params": { 00:08:42.013 "trtype": "pcie", 00:08:42.013 "traddr": "0000:00:07.0", 00:08:42.013 "name": "Nvme1" 00:08:42.013 }, 00:08:42.013 "method": "bdev_nvme_attach_controller" 00:08:42.013 }, 00:08:42.013 { 00:08:42.013 "method": "bdev_wait_for_examine" 00:08:42.013 } 00:08:42.013 ] 00:08:42.013 } 00:08:42.013 ] 00:08:42.013 } 00:08:42.273 [2024-12-13 06:47:46.534012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.273 [2024-12-13 06:47:46.565743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.273  [2024-12-13T06:47:47.051Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:42.532 00:08:42.532 06:47:46 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:42.532 06:47:46 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:42.532 06:47:46 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:42.532 06:47:46 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:42.532 06:47:46 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:42.532 06:47:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:42.532 06:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:42.532 [2024-12-13 06:47:46.930220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
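Note: the verify half reads one 1 MiB block back out of Nvme1n1 at the same offset (--skip=16) into dd.dump1 and checks its first 26 bytes with a plain bash test; the long backslash-escaped string in the trace is just how xtrace renders the right-hand side of [[ == ]], not corrupted data. An equivalent sketch:

    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # CONF as in the sketches above
    "$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip=16 --bs=1048576 --json "$CONF"
    read -rn26 magic_check < "$DUMP1"
    [[ "$magic_check" == 'This Is Our Magic, find it' ]] \
      && echo 'magic intact at offset 16 MiB'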
00:08:42.532 [2024-12-13 06:47:46.930324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ] 00:08:42.532 { 00:08:42.532 "subsystems": [ 00:08:42.532 { 00:08:42.532 "subsystem": "bdev", 00:08:42.532 "config": [ 00:08:42.532 { 00:08:42.532 "params": { 00:08:42.532 "trtype": "pcie", 00:08:42.532 "traddr": "0000:00:06.0", 00:08:42.532 "name": "Nvme0" 00:08:42.532 }, 00:08:42.532 "method": "bdev_nvme_attach_controller" 00:08:42.532 }, 00:08:42.532 { 00:08:42.532 "params": { 00:08:42.532 "trtype": "pcie", 00:08:42.532 "traddr": "0000:00:07.0", 00:08:42.532 "name": "Nvme1" 00:08:42.532 }, 00:08:42.532 "method": "bdev_nvme_attach_controller" 00:08:42.532 }, 00:08:42.532 { 00:08:42.532 "method": "bdev_wait_for_examine" 00:08:42.532 } 00:08:42.532 ] 00:08:42.532 } 00:08:42.532 ] 00:08:42.532 } 00:08:42.791 [2024-12-13 06:47:47.067334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.791 [2024-12-13 06:47:47.097503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.050  [2024-12-13T06:47:47.569Z] Copying: 65/65 [MB] (average 955 MBps) 00:08:43.050 00:08:43.050 06:47:47 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:43.050 06:47:47 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:43.050 06:47:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:43.050 06:47:47 -- common/autotest_common.sh@10 -- # set +x 00:08:43.050 [2024-12-13 06:47:47.552463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:43.050 [2024-12-13 06:47:47.552565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70893 ] 00:08:43.050 { 00:08:43.050 "subsystems": [ 00:08:43.050 { 00:08:43.050 "subsystem": "bdev", 00:08:43.050 "config": [ 00:08:43.050 { 00:08:43.050 "params": { 00:08:43.050 "trtype": "pcie", 00:08:43.050 "traddr": "0000:00:06.0", 00:08:43.050 "name": "Nvme0" 00:08:43.050 }, 00:08:43.050 "method": "bdev_nvme_attach_controller" 00:08:43.050 }, 00:08:43.050 { 00:08:43.050 "params": { 00:08:43.050 "trtype": "pcie", 00:08:43.050 "traddr": "0000:00:07.0", 00:08:43.050 "name": "Nvme1" 00:08:43.050 }, 00:08:43.050 "method": "bdev_nvme_attach_controller" 00:08:43.050 }, 00:08:43.050 { 00:08:43.050 "method": "bdev_wait_for_examine" 00:08:43.050 } 00:08:43.050 ] 00:08:43.050 } 00:08:43.050 ] 00:08:43.050 } 00:08:43.309 [2024-12-13 06:47:47.689630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.309 [2024-12-13 06:47:47.719128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.568  [2024-12-13T06:47:48.087Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:43.568 00:08:43.568 06:47:48 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:43.568 06:47:48 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:43.568 00:08:43.568 real 0m2.298s 00:08:43.568 user 0m1.673s 00:08:43.568 sys 0m0.430s 00:08:43.568 06:47:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.568 06:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.568 ************************************ 00:08:43.568 END TEST dd_offset_magic 00:08:43.568 ************************************ 00:08:43.568 06:47:48 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:43.568 06:47:48 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:43.568 06:47:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:43.568 06:47:48 -- dd/common.sh@11 -- # local nvme_ref= 00:08:43.568 06:47:48 -- dd/common.sh@12 -- # local size=4194330 00:08:43.568 06:47:48 -- dd/common.sh@14 -- # local bs=1048576 00:08:43.568 06:47:48 -- dd/common.sh@15 -- # local count=5 00:08:43.568 06:47:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:43.568 06:47:48 -- dd/common.sh@18 -- # gen_conf 00:08:43.568 06:47:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:43.568 06:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 [2024-12-13 06:47:48.128273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
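Note: cleanup then runs clear_nvme against each controller, streaming zeroes over the first size bytes of the bdev and rounding the block count up, which is how size=4194330 with bs=1048576 becomes count=5 (4194330 is just over four 1 MiB blocks). The arithmetic as I read it from the trace:

    size=4194330 bs=1048576
    count=$(( (size + bs - 1) / bs ))    # ceil(4194330 / 1048576) = 5
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json "$CONF"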
00:08:43.827 [2024-12-13 06:47:48.128411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:08:43.827 { 00:08:43.827 "subsystems": [ 00:08:43.827 { 00:08:43.827 "subsystem": "bdev", 00:08:43.827 "config": [ 00:08:43.827 { 00:08:43.827 "params": { 00:08:43.827 "trtype": "pcie", 00:08:43.827 "traddr": "0000:00:06.0", 00:08:43.827 "name": "Nvme0" 00:08:43.827 }, 00:08:43.827 "method": "bdev_nvme_attach_controller" 00:08:43.827 }, 00:08:43.827 { 00:08:43.827 "params": { 00:08:43.827 "trtype": "pcie", 00:08:43.827 "traddr": "0000:00:07.0", 00:08:43.827 "name": "Nvme1" 00:08:43.827 }, 00:08:43.827 "method": "bdev_nvme_attach_controller" 00:08:43.827 }, 00:08:43.827 { 00:08:43.827 "method": "bdev_wait_for_examine" 00:08:43.827 } 00:08:43.827 ] 00:08:43.827 } 00:08:43.827 ] 00:08:43.827 } 00:08:43.827 [2024-12-13 06:47:48.262379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.827 [2024-12-13 06:47:48.291111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.086  [2024-12-13T06:47:48.864Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:08:44.345 00:08:44.345 06:47:48 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:44.345 06:47:48 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:44.345 06:47:48 -- dd/common.sh@11 -- # local nvme_ref= 00:08:44.345 06:47:48 -- dd/common.sh@12 -- # local size=4194330 00:08:44.345 06:47:48 -- dd/common.sh@14 -- # local bs=1048576 00:08:44.345 06:47:48 -- dd/common.sh@15 -- # local count=5 00:08:44.345 06:47:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:44.345 06:47:48 -- dd/common.sh@18 -- # gen_conf 00:08:44.345 06:47:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:44.345 06:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:44.345 [2024-12-13 06:47:48.661709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:44.345 [2024-12-13 06:47:48.661854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70939 ] 00:08:44.345 { 00:08:44.345 "subsystems": [ 00:08:44.345 { 00:08:44.345 "subsystem": "bdev", 00:08:44.345 "config": [ 00:08:44.345 { 00:08:44.345 "params": { 00:08:44.345 "trtype": "pcie", 00:08:44.345 "traddr": "0000:00:06.0", 00:08:44.345 "name": "Nvme0" 00:08:44.345 }, 00:08:44.345 "method": "bdev_nvme_attach_controller" 00:08:44.345 }, 00:08:44.345 { 00:08:44.345 "params": { 00:08:44.345 "trtype": "pcie", 00:08:44.345 "traddr": "0000:00:07.0", 00:08:44.345 "name": "Nvme1" 00:08:44.345 }, 00:08:44.345 "method": "bdev_nvme_attach_controller" 00:08:44.345 }, 00:08:44.345 { 00:08:44.345 "method": "bdev_wait_for_examine" 00:08:44.345 } 00:08:44.345 ] 00:08:44.345 } 00:08:44.345 ] 00:08:44.345 } 00:08:44.345 [2024-12-13 06:47:48.790129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.345 [2024-12-13 06:47:48.819894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.603  [2024-12-13T06:47:49.122Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:08:44.603 00:08:44.862 06:47:49 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:44.862 ************************************ 00:08:44.862 END TEST spdk_dd_bdev_to_bdev 00:08:44.862 ************************************ 00:08:44.862 00:08:44.862 real 0m6.059s 00:08:44.862 user 0m4.437s 00:08:44.862 sys 0m1.102s 00:08:44.862 06:47:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.862 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.862 06:47:49 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:44.862 06:47:49 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:44.862 06:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.862 06:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.862 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.862 ************************************ 00:08:44.862 START TEST spdk_dd_uring 00:08:44.862 ************************************ 00:08:44.862 06:47:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:44.862 * Looking for test storage... 
00:08:44.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:44.862 06:47:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:44.863 06:47:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:44.863 06:47:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:44.863 06:47:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:44.863 06:47:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:44.863 06:47:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:44.863 06:47:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:44.863 06:47:49 -- scripts/common.sh@335 -- # IFS=.-: 00:08:44.863 06:47:49 -- scripts/common.sh@335 -- # read -ra ver1 00:08:44.863 06:47:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.863 06:47:49 -- scripts/common.sh@336 -- # read -ra ver2 00:08:44.863 06:47:49 -- scripts/common.sh@337 -- # local 'op=<' 00:08:44.863 06:47:49 -- scripts/common.sh@339 -- # ver1_l=2 00:08:44.863 06:47:49 -- scripts/common.sh@340 -- # ver2_l=1 00:08:44.863 06:47:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:44.863 06:47:49 -- scripts/common.sh@343 -- # case "$op" in 00:08:44.863 06:47:49 -- scripts/common.sh@344 -- # : 1 00:08:44.863 06:47:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:44.863 06:47:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.863 06:47:49 -- scripts/common.sh@364 -- # decimal 1 00:08:44.863 06:47:49 -- scripts/common.sh@352 -- # local d=1 00:08:44.863 06:47:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.863 06:47:49 -- scripts/common.sh@354 -- # echo 1 00:08:44.863 06:47:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:44.863 06:47:49 -- scripts/common.sh@365 -- # decimal 2 00:08:44.863 06:47:49 -- scripts/common.sh@352 -- # local d=2 00:08:44.863 06:47:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.863 06:47:49 -- scripts/common.sh@354 -- # echo 2 00:08:44.863 06:47:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:44.863 06:47:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:44.863 06:47:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:44.863 06:47:49 -- scripts/common.sh@367 -- # return 0 00:08:44.863 06:47:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.863 06:47:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:44.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.863 --rc genhtml_branch_coverage=1 00:08:44.863 --rc genhtml_function_coverage=1 00:08:44.863 --rc genhtml_legend=1 00:08:44.863 --rc geninfo_all_blocks=1 00:08:44.863 --rc geninfo_unexecuted_blocks=1 00:08:44.863 00:08:44.863 ' 00:08:44.863 06:47:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:44.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.863 --rc genhtml_branch_coverage=1 00:08:44.863 --rc genhtml_function_coverage=1 00:08:44.863 --rc genhtml_legend=1 00:08:44.863 --rc geninfo_all_blocks=1 00:08:44.863 --rc geninfo_unexecuted_blocks=1 00:08:44.863 00:08:44.863 ' 00:08:44.863 06:47:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:44.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.863 --rc genhtml_branch_coverage=1 00:08:44.863 --rc genhtml_function_coverage=1 00:08:44.863 --rc genhtml_legend=1 00:08:44.863 --rc geninfo_all_blocks=1 00:08:44.863 --rc geninfo_unexecuted_blocks=1 00:08:44.863 00:08:44.863 ' 00:08:44.863 06:47:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:44.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.863 --rc genhtml_branch_coverage=1 00:08:44.863 --rc genhtml_function_coverage=1 00:08:44.863 --rc genhtml_legend=1 00:08:44.863 --rc geninfo_all_blocks=1 00:08:44.863 --rc geninfo_unexecuted_blocks=1 00:08:44.863 00:08:44.863 ' 00:08:44.863 06:47:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.863 06:47:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.863 06:47:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.863 06:47:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.863 06:47:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.863 06:47:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.863 06:47:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.863 06:47:49 -- paths/export.sh@5 -- # export PATH 00:08:44.863 06:47:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.863 06:47:49 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:44.863 06:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.863 06:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.863 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:45.123 ************************************ 00:08:45.123 START TEST dd_uring_copy 00:08:45.123 ************************************ 00:08:45.123 06:47:49 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:45.123 06:47:49 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:45.123 06:47:49 -- dd/uring.sh@16 -- # local magic 00:08:45.123 06:47:49 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:45.123 06:47:49 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:45.123 06:47:49 -- dd/uring.sh@19 -- # local verify_magic 00:08:45.123 06:47:49 -- dd/uring.sh@21 -- # init_zram 00:08:45.123 06:47:49 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:45.123 06:47:49 -- dd/common.sh@164 -- # return 00:08:45.123 06:47:49 -- dd/uring.sh@22 -- # create_zram_dev 00:08:45.123 06:47:49 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:45.123 06:47:49 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:45.123 06:47:49 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:45.123 06:47:49 -- dd/common.sh@181 -- # local id=1 00:08:45.123 06:47:49 -- dd/common.sh@182 -- # local size=512M 00:08:45.123 06:47:49 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:45.123 06:47:49 -- dd/common.sh@186 -- # echo 512M 00:08:45.123 06:47:49 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:45.123 06:47:49 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:45.123 06:47:49 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:45.123 06:47:49 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:45.123 06:47:49 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:45.123 06:47:49 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:45.123 06:47:49 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:45.123 06:47:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:45.123 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:08:45.123 06:47:49 -- dd/uring.sh@41 -- # magic=v32ty7y8vzw966sl7tnkfbr3mcauje1pbq1dpnexipo6nfx10xis6q9m7iwfxgmpam9vq5hxejl7zlzbl1uwzwytzt75n2vg3ij4vlrs29dl7zq6jnf567q74hbg81j9ksls6v1relqaupzpjr4m9ycwsfq739ntnz10xz3enhuhipudt8566v6pr4mzbajkwrere9evwcvjg9w4ip1ee1iw5o1v0f4sjqlfneioc2v3pxbct13inxxltyfnmhw077qnb03ueqrpr0s5q9uyjxk9q6qtsa048fht5dyxfa1jbe8dzeklsll1lmxxmuqp6jp2l5v3nx6bym59xxhdqvxyibfj7trz0mi4qokieyogz5wfbnbcpb85tbmx5w3yupl0h6le0guebnqtmy80f2fr7z03dzu1ri1j0p6ii12mg0vyz2y1j03viu4bngw4gkodbhy61bi5srwsnnag4yp6ttpiw6j4z6lljk4pbtik9sl7sdfnr2bjw7rpa9yjpqjt7xbt9acaqrus2tc4oazysrxgtm5n4e1f7busbyk20i9oa61e9xp8afal181aqvy4669c2d8osisn2aejgf1e0i89x3tsgh8m89begm3whvml2mm9cie17aef34ykv8ngxuql68rp70ib54l8yrsvego6ab9efevk85ka8wioh55ijf5w5qaz8a3o5wqcui8t11qohmg9jltbpyi55a9zyr54pv3tnu5pm8213qzccjksmuu6mhuzy0rcsai6tnpy42aigr2nhdbf18e9tqd1u1h9yn1cbeghphl0fkhdyl5ou6bef7g91hrowgtag3nefa7jprcxbatztnl0e2jzn92tpahvkhefooz1v2rpvc4xty8cy62cecrkdejtuxfruf1non54aqe414ul8p1st4vjude8qrfd05xcd85dsywbdzhacg8xax2qlnn1nhz14it2v8oue0sb1r4z5yrqb5oo7xyz9e0g6etsucsyxdfqipq0b2zlqwyba5gg 00:08:45.123 06:47:49 -- dd/uring.sh@42 -- # echo 
v32ty7y8vzw966sl7tnkfbr3mcauje1pbq1dpnexipo6nfx10xis6q9m7iwfxgmpam9vq5hxejl7zlzbl1uwzwytzt75n2vg3ij4vlrs29dl7zq6jnf567q74hbg81j9ksls6v1relqaupzpjr4m9ycwsfq739ntnz10xz3enhuhipudt8566v6pr4mzbajkwrere9evwcvjg9w4ip1ee1iw5o1v0f4sjqlfneioc2v3pxbct13inxxltyfnmhw077qnb03ueqrpr0s5q9uyjxk9q6qtsa048fht5dyxfa1jbe8dzeklsll1lmxxmuqp6jp2l5v3nx6bym59xxhdqvxyibfj7trz0mi4qokieyogz5wfbnbcpb85tbmx5w3yupl0h6le0guebnqtmy80f2fr7z03dzu1ri1j0p6ii12mg0vyz2y1j03viu4bngw4gkodbhy61bi5srwsnnag4yp6ttpiw6j4z6lljk4pbtik9sl7sdfnr2bjw7rpa9yjpqjt7xbt9acaqrus2tc4oazysrxgtm5n4e1f7busbyk20i9oa61e9xp8afal181aqvy4669c2d8osisn2aejgf1e0i89x3tsgh8m89begm3whvml2mm9cie17aef34ykv8ngxuql68rp70ib54l8yrsvego6ab9efevk85ka8wioh55ijf5w5qaz8a3o5wqcui8t11qohmg9jltbpyi55a9zyr54pv3tnu5pm8213qzccjksmuu6mhuzy0rcsai6tnpy42aigr2nhdbf18e9tqd1u1h9yn1cbeghphl0fkhdyl5ou6bef7g91hrowgtag3nefa7jprcxbatztnl0e2jzn92tpahvkhefooz1v2rpvc4xty8cy62cecrkdejtuxfruf1non54aqe414ul8p1st4vjude8qrfd05xcd85dsywbdzhacg8xax2qlnn1nhz14it2v8oue0sb1r4z5yrqb5oo7xyz9e0g6etsucsyxdfqipq0b2zlqwyba5gg 00:08:45.123 06:47:49 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:45.123 [2024-12-13 06:47:49.468424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.123 [2024-12-13 06:47:49.468518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:08:45.123 [2024-12-13 06:47:49.604968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.123 [2024-12-13 06:47:49.640330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.690  [2024-12-13T06:47:50.468Z] Copying: 511/511 [MB] (average 1954 MBps) 00:08:45.949 00:08:45.949 06:47:50 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:45.950 06:47:50 -- dd/uring.sh@54 -- # gen_conf 00:08:45.950 06:47:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.950 06:47:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.950 [2024-12-13 06:47:50.297413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
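Note: uring_zram_copy stages everything on a compressed-RAM block device: init_zram/set_zram_dev allocate a fresh /dev/zramN through the hot_add control file and size it to 512M, and magic.dump0 is built as the 1024-byte random magic followed by one appended 536869887-byte block of zeroes — 536870911 bytes in all, one byte short of 512 MiB, so it fits the device. The device-side steps, as run above (root and the zram module required):

    id=$(cat /sys/class/zram-control/hot_add)     # kernel returns a free id, 1 here
    echo 512M > "/sys/block/zram${id}/disksize"   # device is now /dev/zram$id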
00:08:45.950 [2024-12-13 06:47:50.297511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71018 ] 00:08:45.950 { 00:08:45.950 "subsystems": [ 00:08:45.950 { 00:08:45.950 "subsystem": "bdev", 00:08:45.950 "config": [ 00:08:45.950 { 00:08:45.950 "params": { 00:08:45.950 "block_size": 512, 00:08:45.950 "num_blocks": 1048576, 00:08:45.950 "name": "malloc0" 00:08:45.950 }, 00:08:45.950 "method": "bdev_malloc_create" 00:08:45.950 }, 00:08:45.950 { 00:08:45.950 "params": { 00:08:45.950 "filename": "/dev/zram1", 00:08:45.950 "name": "uring0" 00:08:45.950 }, 00:08:45.950 "method": "bdev_uring_create" 00:08:45.950 }, 00:08:45.950 { 00:08:45.950 "method": "bdev_wait_for_examine" 00:08:45.950 } 00:08:45.950 ] 00:08:45.950 } 00:08:45.950 ] 00:08:45.950 } 00:08:45.950 [2024-12-13 06:47:50.434971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.950 [2024-12-13 06:47:50.465008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.327  [2024-12-13T06:47:52.783Z] Copying: 242/512 [MB] (242 MBps) [2024-12-13T06:47:52.783Z] Copying: 487/512 [MB] (244 MBps) [2024-12-13T06:47:53.042Z] Copying: 512/512 [MB] (average 243 MBps) 00:08:48.523 00:08:48.523 06:47:52 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:48.523 06:47:52 -- dd/uring.sh@60 -- # gen_conf 00:08:48.523 06:47:52 -- dd/common.sh@31 -- # xtrace_disable 00:08:48.523 06:47:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.523 [2024-12-13 06:47:52.992102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
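Note: both uring legs share one bdev layout: malloc0 is a RAM bdev of 1048576 × 512-byte blocks (512 MiB) and uring0 wraps /dev/zram1 through bdev_uring_create. The write leg above pushed magic.dump0 into uring0 at roughly 243 MBps; the read leg now pulls it back into magic.dump1 so the two files can be compared byte for byte:

    # CONF: path to the malloc0/uring0 JSON printed above
    "$SPDK_DD" --if=magic.dump0 --ob=uring0 --json "$CONF"    # file -> uring bdev
    "$SPDK_DD" --ib=uring0 --of=magic.dump1 --json "$CONF"    # uring bdev -> file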
00:08:48.523 [2024-12-13 06:47:52.992186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71061 ] 00:08:48.523 { 00:08:48.523 "subsystems": [ 00:08:48.523 { 00:08:48.523 "subsystem": "bdev", 00:08:48.523 "config": [ 00:08:48.523 { 00:08:48.523 "params": { 00:08:48.523 "block_size": 512, 00:08:48.523 "num_blocks": 1048576, 00:08:48.523 "name": "malloc0" 00:08:48.523 }, 00:08:48.523 "method": "bdev_malloc_create" 00:08:48.523 }, 00:08:48.523 { 00:08:48.523 "params": { 00:08:48.523 "filename": "/dev/zram1", 00:08:48.523 "name": "uring0" 00:08:48.523 }, 00:08:48.523 "method": "bdev_uring_create" 00:08:48.523 }, 00:08:48.523 { 00:08:48.523 "method": "bdev_wait_for_examine" 00:08:48.523 } 00:08:48.523 ] 00:08:48.523 } 00:08:48.523 ] 00:08:48.523 } 00:08:48.782 [2024-12-13 06:47:53.127514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.782 [2024-12-13 06:47:53.157159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.202  [2024-12-13T06:47:55.295Z] Copying: 159/512 [MB] (159 MBps) [2024-12-13T06:47:56.674Z] Copying: 313/512 [MB] (153 MBps) [2024-12-13T06:47:56.674Z] Copying: 466/512 [MB] (153 MBps) [2024-12-13T06:47:56.933Z] Copying: 512/512 [MB] (average 153 MBps) 00:08:52.414 00:08:52.414 06:47:56 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:52.414 06:47:56 -- dd/uring.sh@66 -- # [[ v32ty7y8vzw966sl7tnkfbr3mcauje1pbq1dpnexipo6nfx10xis6q9m7iwfxgmpam9vq5hxejl7zlzbl1uwzwytzt75n2vg3ij4vlrs29dl7zq6jnf567q74hbg81j9ksls6v1relqaupzpjr4m9ycwsfq739ntnz10xz3enhuhipudt8566v6pr4mzbajkwrere9evwcvjg9w4ip1ee1iw5o1v0f4sjqlfneioc2v3pxbct13inxxltyfnmhw077qnb03ueqrpr0s5q9uyjxk9q6qtsa048fht5dyxfa1jbe8dzeklsll1lmxxmuqp6jp2l5v3nx6bym59xxhdqvxyibfj7trz0mi4qokieyogz5wfbnbcpb85tbmx5w3yupl0h6le0guebnqtmy80f2fr7z03dzu1ri1j0p6ii12mg0vyz2y1j03viu4bngw4gkodbhy61bi5srwsnnag4yp6ttpiw6j4z6lljk4pbtik9sl7sdfnr2bjw7rpa9yjpqjt7xbt9acaqrus2tc4oazysrxgtm5n4e1f7busbyk20i9oa61e9xp8afal181aqvy4669c2d8osisn2aejgf1e0i89x3tsgh8m89begm3whvml2mm9cie17aef34ykv8ngxuql68rp70ib54l8yrsvego6ab9efevk85ka8wioh55ijf5w5qaz8a3o5wqcui8t11qohmg9jltbpyi55a9zyr54pv3tnu5pm8213qzccjksmuu6mhuzy0rcsai6tnpy42aigr2nhdbf18e9tqd1u1h9yn1cbeghphl0fkhdyl5ou6bef7g91hrowgtag3nefa7jprcxbatztnl0e2jzn92tpahvkhefooz1v2rpvc4xty8cy62cecrkdejtuxfruf1non54aqe414ul8p1st4vjude8qrfd05xcd85dsywbdzhacg8xax2qlnn1nhz14it2v8oue0sb1r4z5yrqb5oo7xyz9e0g6etsucsyxdfqipq0b2zlqwyba5gg == 
\v\3\2\t\y\7\y\8\v\z\w\9\6\6\s\l\7\t\n\k\f\b\r\3\m\c\a\u\j\e\1\p\b\q\1\d\p\n\e\x\i\p\o\6\n\f\x\1\0\x\i\s\6\q\9\m\7\i\w\f\x\g\m\p\a\m\9\v\q\5\h\x\e\j\l\7\z\l\z\b\l\1\u\w\z\w\y\t\z\t\7\5\n\2\v\g\3\i\j\4\v\l\r\s\2\9\d\l\7\z\q\6\j\n\f\5\6\7\q\7\4\h\b\g\8\1\j\9\k\s\l\s\6\v\1\r\e\l\q\a\u\p\z\p\j\r\4\m\9\y\c\w\s\f\q\7\3\9\n\t\n\z\1\0\x\z\3\e\n\h\u\h\i\p\u\d\t\8\5\6\6\v\6\p\r\4\m\z\b\a\j\k\w\r\e\r\e\9\e\v\w\c\v\j\g\9\w\4\i\p\1\e\e\1\i\w\5\o\1\v\0\f\4\s\j\q\l\f\n\e\i\o\c\2\v\3\p\x\b\c\t\1\3\i\n\x\x\l\t\y\f\n\m\h\w\0\7\7\q\n\b\0\3\u\e\q\r\p\r\0\s\5\q\9\u\y\j\x\k\9\q\6\q\t\s\a\0\4\8\f\h\t\5\d\y\x\f\a\1\j\b\e\8\d\z\e\k\l\s\l\l\1\l\m\x\x\m\u\q\p\6\j\p\2\l\5\v\3\n\x\6\b\y\m\5\9\x\x\h\d\q\v\x\y\i\b\f\j\7\t\r\z\0\m\i\4\q\o\k\i\e\y\o\g\z\5\w\f\b\n\b\c\p\b\8\5\t\b\m\x\5\w\3\y\u\p\l\0\h\6\l\e\0\g\u\e\b\n\q\t\m\y\8\0\f\2\f\r\7\z\0\3\d\z\u\1\r\i\1\j\0\p\6\i\i\1\2\m\g\0\v\y\z\2\y\1\j\0\3\v\i\u\4\b\n\g\w\4\g\k\o\d\b\h\y\6\1\b\i\5\s\r\w\s\n\n\a\g\4\y\p\6\t\t\p\i\w\6\j\4\z\6\l\l\j\k\4\p\b\t\i\k\9\s\l\7\s\d\f\n\r\2\b\j\w\7\r\p\a\9\y\j\p\q\j\t\7\x\b\t\9\a\c\a\q\r\u\s\2\t\c\4\o\a\z\y\s\r\x\g\t\m\5\n\4\e\1\f\7\b\u\s\b\y\k\2\0\i\9\o\a\6\1\e\9\x\p\8\a\f\a\l\1\8\1\a\q\v\y\4\6\6\9\c\2\d\8\o\s\i\s\n\2\a\e\j\g\f\1\e\0\i\8\9\x\3\t\s\g\h\8\m\8\9\b\e\g\m\3\w\h\v\m\l\2\m\m\9\c\i\e\1\7\a\e\f\3\4\y\k\v\8\n\g\x\u\q\l\6\8\r\p\7\0\i\b\5\4\l\8\y\r\s\v\e\g\o\6\a\b\9\e\f\e\v\k\8\5\k\a\8\w\i\o\h\5\5\i\j\f\5\w\5\q\a\z\8\a\3\o\5\w\q\c\u\i\8\t\1\1\q\o\h\m\g\9\j\l\t\b\p\y\i\5\5\a\9\z\y\r\5\4\p\v\3\t\n\u\5\p\m\8\2\1\3\q\z\c\c\j\k\s\m\u\u\6\m\h\u\z\y\0\r\c\s\a\i\6\t\n\p\y\4\2\a\i\g\r\2\n\h\d\b\f\1\8\e\9\t\q\d\1\u\1\h\9\y\n\1\c\b\e\g\h\p\h\l\0\f\k\h\d\y\l\5\o\u\6\b\e\f\7\g\9\1\h\r\o\w\g\t\a\g\3\n\e\f\a\7\j\p\r\c\x\b\a\t\z\t\n\l\0\e\2\j\z\n\9\2\t\p\a\h\v\k\h\e\f\o\o\z\1\v\2\r\p\v\c\4\x\t\y\8\c\y\6\2\c\e\c\r\k\d\e\j\t\u\x\f\r\u\f\1\n\o\n\5\4\a\q\e\4\1\4\u\l\8\p\1\s\t\4\v\j\u\d\e\8\q\r\f\d\0\5\x\c\d\8\5\d\s\y\w\b\d\z\h\a\c\g\8\x\a\x\2\q\l\n\n\1\n\h\z\1\4\i\t\2\v\8\o\u\e\0\s\b\1\r\4\z\5\y\r\q\b\5\o\o\7\x\y\z\9\e\0\g\6\e\t\s\u\c\s\y\x\d\f\q\i\p\q\0\b\2\z\l\q\w\y\b\a\5\g\g ]] 00:08:52.414 06:47:56 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:52.415 06:47:56 -- dd/uring.sh@69 -- # [[ v32ty7y8vzw966sl7tnkfbr3mcauje1pbq1dpnexipo6nfx10xis6q9m7iwfxgmpam9vq5hxejl7zlzbl1uwzwytzt75n2vg3ij4vlrs29dl7zq6jnf567q74hbg81j9ksls6v1relqaupzpjr4m9ycwsfq739ntnz10xz3enhuhipudt8566v6pr4mzbajkwrere9evwcvjg9w4ip1ee1iw5o1v0f4sjqlfneioc2v3pxbct13inxxltyfnmhw077qnb03ueqrpr0s5q9uyjxk9q6qtsa048fht5dyxfa1jbe8dzeklsll1lmxxmuqp6jp2l5v3nx6bym59xxhdqvxyibfj7trz0mi4qokieyogz5wfbnbcpb85tbmx5w3yupl0h6le0guebnqtmy80f2fr7z03dzu1ri1j0p6ii12mg0vyz2y1j03viu4bngw4gkodbhy61bi5srwsnnag4yp6ttpiw6j4z6lljk4pbtik9sl7sdfnr2bjw7rpa9yjpqjt7xbt9acaqrus2tc4oazysrxgtm5n4e1f7busbyk20i9oa61e9xp8afal181aqvy4669c2d8osisn2aejgf1e0i89x3tsgh8m89begm3whvml2mm9cie17aef34ykv8ngxuql68rp70ib54l8yrsvego6ab9efevk85ka8wioh55ijf5w5qaz8a3o5wqcui8t11qohmg9jltbpyi55a9zyr54pv3tnu5pm8213qzccjksmuu6mhuzy0rcsai6tnpy42aigr2nhdbf18e9tqd1u1h9yn1cbeghphl0fkhdyl5ou6bef7g91hrowgtag3nefa7jprcxbatztnl0e2jzn92tpahvkhefooz1v2rpvc4xty8cy62cecrkdejtuxfruf1non54aqe414ul8p1st4vjude8qrfd05xcd85dsywbdzhacg8xax2qlnn1nhz14it2v8oue0sb1r4z5yrqb5oo7xyz9e0g6etsucsyxdfqipq0b2zlqwyba5gg == 
\v\3\2\t\y\7\y\8\v\z\w\9\6\6\s\l\7\t\n\k\f\b\r\3\m\c\a\u\j\e\1\p\b\q\1\d\p\n\e\x\i\p\o\6\n\f\x\1\0\x\i\s\6\q\9\m\7\i\w\f\x\g\m\p\a\m\9\v\q\5\h\x\e\j\l\7\z\l\z\b\l\1\u\w\z\w\y\t\z\t\7\5\n\2\v\g\3\i\j\4\v\l\r\s\2\9\d\l\7\z\q\6\j\n\f\5\6\7\q\7\4\h\b\g\8\1\j\9\k\s\l\s\6\v\1\r\e\l\q\a\u\p\z\p\j\r\4\m\9\y\c\w\s\f\q\7\3\9\n\t\n\z\1\0\x\z\3\e\n\h\u\h\i\p\u\d\t\8\5\6\6\v\6\p\r\4\m\z\b\a\j\k\w\r\e\r\e\9\e\v\w\c\v\j\g\9\w\4\i\p\1\e\e\1\i\w\5\o\1\v\0\f\4\s\j\q\l\f\n\e\i\o\c\2\v\3\p\x\b\c\t\1\3\i\n\x\x\l\t\y\f\n\m\h\w\0\7\7\q\n\b\0\3\u\e\q\r\p\r\0\s\5\q\9\u\y\j\x\k\9\q\6\q\t\s\a\0\4\8\f\h\t\5\d\y\x\f\a\1\j\b\e\8\d\z\e\k\l\s\l\l\1\l\m\x\x\m\u\q\p\6\j\p\2\l\5\v\3\n\x\6\b\y\m\5\9\x\x\h\d\q\v\x\y\i\b\f\j\7\t\r\z\0\m\i\4\q\o\k\i\e\y\o\g\z\5\w\f\b\n\b\c\p\b\8\5\t\b\m\x\5\w\3\y\u\p\l\0\h\6\l\e\0\g\u\e\b\n\q\t\m\y\8\0\f\2\f\r\7\z\0\3\d\z\u\1\r\i\1\j\0\p\6\i\i\1\2\m\g\0\v\y\z\2\y\1\j\0\3\v\i\u\4\b\n\g\w\4\g\k\o\d\b\h\y\6\1\b\i\5\s\r\w\s\n\n\a\g\4\y\p\6\t\t\p\i\w\6\j\4\z\6\l\l\j\k\4\p\b\t\i\k\9\s\l\7\s\d\f\n\r\2\b\j\w\7\r\p\a\9\y\j\p\q\j\t\7\x\b\t\9\a\c\a\q\r\u\s\2\t\c\4\o\a\z\y\s\r\x\g\t\m\5\n\4\e\1\f\7\b\u\s\b\y\k\2\0\i\9\o\a\6\1\e\9\x\p\8\a\f\a\l\1\8\1\a\q\v\y\4\6\6\9\c\2\d\8\o\s\i\s\n\2\a\e\j\g\f\1\e\0\i\8\9\x\3\t\s\g\h\8\m\8\9\b\e\g\m\3\w\h\v\m\l\2\m\m\9\c\i\e\1\7\a\e\f\3\4\y\k\v\8\n\g\x\u\q\l\6\8\r\p\7\0\i\b\5\4\l\8\y\r\s\v\e\g\o\6\a\b\9\e\f\e\v\k\8\5\k\a\8\w\i\o\h\5\5\i\j\f\5\w\5\q\a\z\8\a\3\o\5\w\q\c\u\i\8\t\1\1\q\o\h\m\g\9\j\l\t\b\p\y\i\5\5\a\9\z\y\r\5\4\p\v\3\t\n\u\5\p\m\8\2\1\3\q\z\c\c\j\k\s\m\u\u\6\m\h\u\z\y\0\r\c\s\a\i\6\t\n\p\y\4\2\a\i\g\r\2\n\h\d\b\f\1\8\e\9\t\q\d\1\u\1\h\9\y\n\1\c\b\e\g\h\p\h\l\0\f\k\h\d\y\l\5\o\u\6\b\e\f\7\g\9\1\h\r\o\w\g\t\a\g\3\n\e\f\a\7\j\p\r\c\x\b\a\t\z\t\n\l\0\e\2\j\z\n\9\2\t\p\a\h\v\k\h\e\f\o\o\z\1\v\2\r\p\v\c\4\x\t\y\8\c\y\6\2\c\e\c\r\k\d\e\j\t\u\x\f\r\u\f\1\n\o\n\5\4\a\q\e\4\1\4\u\l\8\p\1\s\t\4\v\j\u\d\e\8\q\r\f\d\0\5\x\c\d\8\5\d\s\y\w\b\d\z\h\a\c\g\8\x\a\x\2\q\l\n\n\1\n\h\z\1\4\i\t\2\v\8\o\u\e\0\s\b\1\r\4\z\5\y\r\q\b\5\o\o\7\x\y\z\9\e\0\g\6\e\t\s\u\c\s\y\x\d\f\q\i\p\q\0\b\2\z\l\q\w\y\b\a\5\g\g ]] 00:08:52.415 06:47:56 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:52.982 06:47:57 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:52.982 06:47:57 -- dd/uring.sh@75 -- # gen_conf 00:08:52.982 06:47:57 -- dd/common.sh@31 -- # xtrace_disable 00:08:52.982 06:47:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.982 [2024-12-13 06:47:57.260923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
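Note: the two read -rn1024 checks pull the first KiB out of each dump and compare it with the generated magic (again, the wall of backslashes is only xtrace escaping the comparison pattern), and diff -q then confirms the whole 536870911-byte round trip. A sketch of the same verification, assuming the dump files are the read sources:

    read -rn1024 verify_magic < magic.dump0
    [[ "$verify_magic" == "$magic" ]] || exit 1
    read -rn1024 verify_magic < magic.dump1
    [[ "$verify_magic" == "$magic" ]] || exit 1
    diff -q magic.dump0 magic.dump1    # exits 0 only if every byte matches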
00:08:52.982 [2024-12-13 06:47:57.261023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71136 ] 00:08:52.982 { 00:08:52.982 "subsystems": [ 00:08:52.982 { 00:08:52.982 "subsystem": "bdev", 00:08:52.982 "config": [ 00:08:52.982 { 00:08:52.982 "params": { 00:08:52.982 "block_size": 512, 00:08:52.982 "num_blocks": 1048576, 00:08:52.982 "name": "malloc0" 00:08:52.982 }, 00:08:52.982 "method": "bdev_malloc_create" 00:08:52.982 }, 00:08:52.982 { 00:08:52.982 "params": { 00:08:52.982 "filename": "/dev/zram1", 00:08:52.982 "name": "uring0" 00:08:52.982 }, 00:08:52.982 "method": "bdev_uring_create" 00:08:52.982 }, 00:08:52.982 { 00:08:52.982 "method": "bdev_wait_for_examine" 00:08:52.982 } 00:08:52.982 ] 00:08:52.982 } 00:08:52.982 ] 00:08:52.982 } 00:08:52.982 [2024-12-13 06:47:57.393188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.982 [2024-12-13 06:47:57.422611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.359  [2024-12-13T06:47:59.814Z] Copying: 175/512 [MB] (175 MBps) [2024-12-13T06:48:00.751Z] Copying: 348/512 [MB] (172 MBps) [2024-12-13T06:48:00.751Z] Copying: 510/512 [MB] (162 MBps) [2024-12-13T06:48:01.078Z] Copying: 512/512 [MB] (average 170 MBps) 00:08:56.559 00:08:56.559 06:48:00 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:56.559 06:48:00 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:56.559 06:48:00 -- dd/uring.sh@87 -- # : 00:08:56.559 06:48:00 -- dd/uring.sh@87 -- # : 00:08:56.559 06:48:00 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:56.559 06:48:00 -- dd/uring.sh@87 -- # gen_conf 00:08:56.559 06:48:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:56.559 06:48:00 -- common/autotest_common.sh@10 -- # set +x 00:08:56.559 [2024-12-13 06:48:00.828437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
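Note: the bdev_uring_delete leg is all setup and no payload: the two bare ':' commands at dd/uring.sh@87 appear to provide empty streams for --if=/dev/fd/62 and --of=/dev/fd/61, so spdk_dd copies nothing ("Copying: 0/0 [B]" below) while the JSON on /dev/fd/59 creates uring0 and deletes it again within the same config. A sketch of the shape of that invocation, with process substitution standing in for the helper descriptors:

    "$SPDK_DD" --if=<(:) --of=/dev/null --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},
       "method":"bdev_malloc_create"},
      {"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},
      {"params":{"name":"uring0"},"method":"bdev_uring_delete"},
      {"method":"bdev_wait_for_examine"}
    ]}]}
    EOF
    )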
00:08:56.559 [2024-12-13 06:48:00.828546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:08:56.559 { 00:08:56.559 "subsystems": [ 00:08:56.559 { 00:08:56.559 "subsystem": "bdev", 00:08:56.559 "config": [ 00:08:56.559 { 00:08:56.559 "params": { 00:08:56.559 "block_size": 512, 00:08:56.559 "num_blocks": 1048576, 00:08:56.559 "name": "malloc0" 00:08:56.559 }, 00:08:56.559 "method": "bdev_malloc_create" 00:08:56.559 }, 00:08:56.559 { 00:08:56.559 "params": { 00:08:56.559 "filename": "/dev/zram1", 00:08:56.559 "name": "uring0" 00:08:56.559 }, 00:08:56.559 "method": "bdev_uring_create" 00:08:56.559 }, 00:08:56.559 { 00:08:56.559 "params": { 00:08:56.559 "name": "uring0" 00:08:56.559 }, 00:08:56.559 "method": "bdev_uring_delete" 00:08:56.559 }, 00:08:56.559 { 00:08:56.559 "method": "bdev_wait_for_examine" 00:08:56.559 } 00:08:56.559 ] 00:08:56.559 } 00:08:56.559 ] 00:08:56.559 } 00:08:56.559 [2024-12-13 06:48:00.967298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.559 [2024-12-13 06:48:00.997463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.818  [2024-12-13T06:48:01.595Z] Copying: 0/0 [B] (average 0 Bps) 00:08:57.076 00:08:57.076 06:48:01 -- dd/uring.sh@94 -- # : 00:08:57.076 06:48:01 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:57.076 06:48:01 -- dd/uring.sh@94 -- # gen_conf 00:08:57.076 06:48:01 -- common/autotest_common.sh@650 -- # local es=0 00:08:57.076 06:48:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:57.076 06:48:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:57.076 06:48:01 -- common/autotest_common.sh@10 -- # set +x 00:08:57.076 06:48:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.076 06:48:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.076 06:48:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.076 06:48:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.077 06:48:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.077 06:48:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.077 06:48:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.077 06:48:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:57.077 06:48:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:57.077 [2024-12-13 06:48:01.428959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
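Note: with uring0 gone, the follow-up copy that tries to read from it has to fail, and the NOT wrapper inverts that failure into a pass; the es=237 / es=109 / es=1 lines are autotest_common collapsing spdk_dd's raw exit status down to a simple failure code. A simplified stand-in for the pattern (the real helper also validates that its argument is an executable first; SPDK_DD/CONF as in the earlier sketches):

    NOT() { ! "$@"; }    # succeeds only when the wrapped command fails
    NOT "$SPDK_DD" --ib=uring0 --of=/dev/null --json "$CONF" \
      && echo 'uring0 really is gone'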
00:08:57.077 [2024-12-13 06:48:01.429062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:08:57.077 { 00:08:57.077 "subsystems": [ 00:08:57.077 { 00:08:57.077 "subsystem": "bdev", 00:08:57.077 "config": [ 00:08:57.077 { 00:08:57.077 "params": { 00:08:57.077 "block_size": 512, 00:08:57.077 "num_blocks": 1048576, 00:08:57.077 "name": "malloc0" 00:08:57.077 }, 00:08:57.077 "method": "bdev_malloc_create" 00:08:57.077 }, 00:08:57.077 { 00:08:57.077 "params": { 00:08:57.077 "filename": "/dev/zram1", 00:08:57.077 "name": "uring0" 00:08:57.077 }, 00:08:57.077 "method": "bdev_uring_create" 00:08:57.077 }, 00:08:57.077 { 00:08:57.077 "params": { 00:08:57.077 "name": "uring0" 00:08:57.077 }, 00:08:57.077 "method": "bdev_uring_delete" 00:08:57.077 }, 00:08:57.077 { 00:08:57.077 "method": "bdev_wait_for_examine" 00:08:57.077 } 00:08:57.077 ] 00:08:57.077 } 00:08:57.077 ] 00:08:57.077 } 00:08:57.077 [2024-12-13 06:48:01.572096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.336 [2024-12-13 06:48:01.607033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.336 [2024-12-13 06:48:01.750329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:57.336 [2024-12-13 06:48:01.750408] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:57.336 [2024-12-13 06:48:01.750435] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:57.336 [2024-12-13 06:48:01.750444] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.594 [2024-12-13 06:48:01.908888] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:57.594 06:48:01 -- common/autotest_common.sh@653 -- # es=237 00:08:57.594 06:48:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.594 06:48:01 -- common/autotest_common.sh@662 -- # es=109 00:08:57.594 06:48:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:57.594 06:48:01 -- common/autotest_common.sh@670 -- # es=1 00:08:57.594 06:48:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.594 06:48:01 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:57.594 06:48:01 -- dd/common.sh@172 -- # local id=1 00:08:57.594 06:48:01 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:57.594 06:48:01 -- dd/common.sh@176 -- # echo 1 00:08:57.594 06:48:01 -- dd/common.sh@177 -- # echo 1 00:08:57.594 06:48:01 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:57.852 00:08:57.853 real 0m12.915s 00:08:57.853 user 0m7.375s 00:08:57.853 sys 0m4.903s 00:08:57.853 ************************************ 00:08:57.853 END TEST dd_uring_copy 00:08:57.853 06:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.853 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.853 ************************************ 00:08:57.853 00:08:57.853 real 0m13.155s 00:08:57.853 user 0m7.504s 00:08:57.853 sys 0m5.021s 00:08:57.853 06:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.853 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.853 ************************************ 00:08:57.853 END TEST spdk_dd_uring 00:08:57.853 ************************************ 00:08:58.112 06:48:02 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:58.112 06:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.112 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.112 ************************************ 00:08:58.112 START TEST spdk_dd_sparse 00:08:58.112 ************************************ 00:08:58.112 06:48:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:58.112 * Looking for test storage... 00:08:58.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:58.112 06:48:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:58.112 06:48:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:58.112 06:48:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:58.112 06:48:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:58.112 06:48:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:58.112 06:48:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:58.112 06:48:02 -- scripts/common.sh@335 -- # IFS=.-: 00:08:58.112 06:48:02 -- scripts/common.sh@335 -- # read -ra ver1 00:08:58.112 06:48:02 -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.112 06:48:02 -- scripts/common.sh@336 -- # read -ra ver2 00:08:58.112 06:48:02 -- scripts/common.sh@337 -- # local 'op=<' 00:08:58.112 06:48:02 -- scripts/common.sh@339 -- # ver1_l=2 00:08:58.112 06:48:02 -- scripts/common.sh@340 -- # ver2_l=1 00:08:58.112 06:48:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:58.112 06:48:02 -- scripts/common.sh@343 -- # case "$op" in 00:08:58.112 06:48:02 -- scripts/common.sh@344 -- # : 1 00:08:58.112 06:48:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:58.112 06:48:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.112 06:48:02 -- scripts/common.sh@364 -- # decimal 1 00:08:58.112 06:48:02 -- scripts/common.sh@352 -- # local d=1 00:08:58.112 06:48:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.112 06:48:02 -- scripts/common.sh@354 -- # echo 1 00:08:58.112 06:48:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:58.112 06:48:02 -- scripts/common.sh@365 -- # decimal 2 00:08:58.112 06:48:02 -- scripts/common.sh@352 -- # local d=2 00:08:58.112 06:48:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.112 06:48:02 -- scripts/common.sh@354 -- # echo 2 00:08:58.112 06:48:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:58.112 06:48:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:58.112 06:48:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:58.112 06:48:02 -- scripts/common.sh@367 -- # return 0 00:08:58.112 06:48:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.112 --rc genhtml_branch_coverage=1 00:08:58.112 --rc genhtml_function_coverage=1 00:08:58.112 --rc genhtml_legend=1 00:08:58.112 --rc geninfo_all_blocks=1 00:08:58.112 --rc geninfo_unexecuted_blocks=1 00:08:58.112 00:08:58.112 ' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.112 --rc genhtml_branch_coverage=1 00:08:58.112 --rc genhtml_function_coverage=1 00:08:58.112 --rc genhtml_legend=1 00:08:58.112 --rc geninfo_all_blocks=1 00:08:58.112 --rc geninfo_unexecuted_blocks=1 00:08:58.112 00:08:58.112 ' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.112 --rc genhtml_branch_coverage=1 00:08:58.112 --rc genhtml_function_coverage=1 00:08:58.112 --rc genhtml_legend=1 00:08:58.112 --rc geninfo_all_blocks=1 00:08:58.112 --rc geninfo_unexecuted_blocks=1 00:08:58.112 00:08:58.112 ' 00:08:58.112 06:48:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.112 --rc genhtml_branch_coverage=1 00:08:58.112 --rc genhtml_function_coverage=1 00:08:58.112 --rc genhtml_legend=1 00:08:58.112 --rc geninfo_all_blocks=1 00:08:58.112 --rc geninfo_unexecuted_blocks=1 00:08:58.112 00:08:58.112 ' 00:08:58.112 06:48:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.112 06:48:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.112 06:48:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.112 06:48:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.113 06:48:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.113 06:48:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.113 06:48:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.113 06:48:02 -- paths/export.sh@5 -- # export PATH 00:08:58.113 06:48:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.113 06:48:02 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:58.113 06:48:02 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:58.113 06:48:02 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:58.113 06:48:02 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:58.113 06:48:02 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:58.113 06:48:02 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:58.113 06:48:02 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:58.113 06:48:02 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:58.113 06:48:02 -- dd/sparse.sh@118 -- # prepare 00:08:58.113 06:48:02 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:58.113 06:48:02 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:58.113 1+0 records in 00:08:58.113 1+0 records out 00:08:58.113 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00590336 s, 710 MB/s 00:08:58.113 06:48:02 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:58.113 1+0 records in 00:08:58.113 1+0 records out 00:08:58.113 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00380069 s, 1.1 GB/s 00:08:58.113 06:48:02 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:58.113 1+0 records in 00:08:58.113 1+0 records out 00:08:58.113 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.006734 s, 623 MB/s 00:08:58.113 06:48:02 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:58.113 06:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.113 06:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.113 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.113 ************************************ 00:08:58.113 START TEST dd_sparse_file_to_file 00:08:58.113 
************************************ 00:08:58.113 06:48:02 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:58.113 06:48:02 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:58.113 06:48:02 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:58.113 06:48:02 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:58.113 06:48:02 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:58.113 06:48:02 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:58.113 06:48:02 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:58.113 06:48:02 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:58.113 06:48:02 -- dd/sparse.sh@41 -- # gen_conf 00:08:58.113 06:48:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:58.113 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:08:58.372 [2024-12-13 06:48:02.676516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:58.372 [2024-12-13 06:48:02.676633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:08:58.372 { 00:08:58.372 "subsystems": [ 00:08:58.372 { 00:08:58.372 "subsystem": "bdev", 00:08:58.372 "config": [ 00:08:58.372 { 00:08:58.372 "params": { 00:08:58.372 "block_size": 4096, 00:08:58.372 "filename": "dd_sparse_aio_disk", 00:08:58.372 "name": "dd_aio" 00:08:58.372 }, 00:08:58.372 "method": "bdev_aio_create" 00:08:58.372 }, 00:08:58.372 { 00:08:58.372 "params": { 00:08:58.372 "lvs_name": "dd_lvstore", 00:08:58.372 "bdev_name": "dd_aio" 00:08:58.372 }, 00:08:58.372 "method": "bdev_lvol_create_lvstore" 00:08:58.372 }, 00:08:58.372 { 00:08:58.372 "method": "bdev_wait_for_examine" 00:08:58.372 } 00:08:58.372 ] 00:08:58.372 } 00:08:58.372 ] 00:08:58.372 } 00:08:58.372 [2024-12-13 06:48:02.816855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.372 [2024-12-13 06:48:02.857539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.631  [2024-12-13T06:48:03.150Z] Copying: 12/36 [MB] (average 1333 MBps) 00:08:58.631 00:08:58.631 06:48:03 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:58.631 06:48:03 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:58.631 06:48:03 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:58.631 06:48:03 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:58.631 06:48:03 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:58.631 06:48:03 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:58.890 06:48:03 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:58.890 06:48:03 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:58.890 06:48:03 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:58.890 06:48:03 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:58.890 00:08:58.890 real 0m0.533s 00:08:58.890 user 0m0.292s 00:08:58.890 sys 0m0.154s 00:08:58.890 06:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.890 ************************************ 00:08:58.890 END TEST dd_sparse_file_to_file 00:08:58.890 ************************************ 00:08:58.890 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 06:48:03 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:58.890 06:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.890 06:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.890 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 ************************************ 00:08:58.890 START TEST dd_sparse_file_to_bdev 00:08:58.890 ************************************ 00:08:58.890 06:48:03 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:58.890 06:48:03 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:58.890 06:48:03 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:58.890 06:48:03 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:58.890 06:48:03 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:58.890 06:48:03 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:58.890 06:48:03 -- dd/sparse.sh@73 -- # gen_conf 00:08:58.890 06:48:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:58.890 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 [2024-12-13 06:48:03.255612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:58.890 [2024-12-13 06:48:03.255722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71347 ] 00:08:58.890 { 00:08:58.890 "subsystems": [ 00:08:58.890 { 00:08:58.890 "subsystem": "bdev", 00:08:58.890 "config": [ 00:08:58.890 { 00:08:58.890 "params": { 00:08:58.890 "block_size": 4096, 00:08:58.890 "filename": "dd_sparse_aio_disk", 00:08:58.890 "name": "dd_aio" 00:08:58.890 }, 00:08:58.890 "method": "bdev_aio_create" 00:08:58.890 }, 00:08:58.890 { 00:08:58.890 "params": { 00:08:58.890 "lvs_name": "dd_lvstore", 00:08:58.890 "lvol_name": "dd_lvol", 00:08:58.890 "size": 37748736, 00:08:58.890 "thin_provision": true 00:08:58.890 }, 00:08:58.890 "method": "bdev_lvol_create" 00:08:58.891 }, 00:08:58.891 { 00:08:58.891 "method": "bdev_wait_for_examine" 00:08:58.891 } 00:08:58.891 ] 00:08:58.891 } 00:08:58.891 ] 00:08:58.891 } 00:08:58.891 [2024-12-13 06:48:03.394638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.149 [2024-12-13 06:48:03.435318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.149 [2024-12-13 06:48:03.498525] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:59.149  [2024-12-13T06:48:03.668Z] Copying: 12/36 [MB] (average 571 MBps)[2024-12-13 06:48:03.536961] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:59.406 00:08:59.407 00:08:59.407 00:08:59.407 real 0m0.510s 00:08:59.407 user 0m0.308s 00:08:59.407 sys 0m0.131s 00:08:59.407 06:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.407 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:59.407 ************************************ 00:08:59.407 END TEST dd_sparse_file_to_bdev 00:08:59.407 ************************************ 
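The dd_sparse_* tests above all share one setup: prepare builds a 100 MiB AIO backing file plus a source file with three 4 MiB data extents separated by holes, and each test then checks that a copy through spdk_dd --sparse preserves both the apparent size and the allocated block count. A minimal standalone sketch of that setup and check, using the same coreutils calls and sizes as the trace (an illustration, not the literal dd/sparse.sh source):

    truncate dd_sparse_aio_disk --size 104857600        # 100 MiB AIO backing file
    dd if=/dev/zero of=file_zero1 bs=4M count=1         # data extent at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data at 16 MiB, hole before it
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data at 32 MiB, second hole
    stat --printf=%s file_zero1  # apparent size: 37748736 bytes (36 MiB)
    stat --printf=%b file_zero1  # allocated 512-byte blocks: 24576 (only 12 MiB on disk)
    # After spdk_dd --if=file_zero1 --of=file_zero2 --sparse, the test passes only if
    # both stat values match between file_zero1 and file_zero2, i.e. the holes survived.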
00:08:59.407 06:48:03 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:59.407 06:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.407 06:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.407 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:59.407 ************************************ 00:08:59.407 START TEST dd_sparse_bdev_to_file 00:08:59.407 ************************************ 00:08:59.407 06:48:03 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:59.407 06:48:03 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:59.407 06:48:03 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:59.407 06:48:03 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:59.407 06:48:03 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:59.407 06:48:03 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:59.407 06:48:03 -- dd/sparse.sh@91 -- # gen_conf 00:08:59.407 06:48:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:59.407 06:48:03 -- common/autotest_common.sh@10 -- # set +x 00:08:59.407 [2024-12-13 06:48:03.820938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.407 [2024-12-13 06:48:03.821044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71378 ] 00:08:59.407 { 00:08:59.407 "subsystems": [ 00:08:59.407 { 00:08:59.407 "subsystem": "bdev", 00:08:59.407 "config": [ 00:08:59.407 { 00:08:59.407 "params": { 00:08:59.407 "block_size": 4096, 00:08:59.407 "filename": "dd_sparse_aio_disk", 00:08:59.407 "name": "dd_aio" 00:08:59.407 }, 00:08:59.407 "method": "bdev_aio_create" 00:08:59.407 }, 00:08:59.407 { 00:08:59.407 "method": "bdev_wait_for_examine" 00:08:59.407 } 00:08:59.407 ] 00:08:59.407 } 00:08:59.407 ] 00:08:59.407 } 00:08:59.665 [2024-12-13 06:48:03.961885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.665 [2024-12-13 06:48:04.000991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.665  [2024-12-13T06:48:04.443Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:59.924 00:08:59.924 06:48:04 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:59.924 06:48:04 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:59.924 06:48:04 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:59.924 06:48:04 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:59.924 06:48:04 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:59.924 06:48:04 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:59.924 06:48:04 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:59.924 06:48:04 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:59.924 06:48:04 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:59.924 06:48:04 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:59.924 00:08:59.924 real 0m0.511s 00:08:59.924 user 0m0.293s 00:08:59.924 sys 0m0.140s 00:08:59.924 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.924 ************************************ 00:08:59.924 END TEST dd_sparse_bdev_to_file 00:08:59.924 ************************************ 00:08:59.924 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.924 06:48:04 -- 
dd/sparse.sh@1 -- # cleanup 00:08:59.924 06:48:04 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:59.924 06:48:04 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:59.924 06:48:04 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:59.924 06:48:04 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:59.924 ************************************ 00:08:59.924 END TEST spdk_dd_sparse 00:08:59.924 ************************************ 00:08:59.924 00:08:59.924 real 0m1.954s 00:08:59.924 user 0m1.061s 00:08:59.924 sys 0m0.655s 00:08:59.924 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.924 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.924 06:48:04 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:59.924 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.924 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.924 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.924 ************************************ 00:08:59.924 START TEST spdk_dd_negative 00:08:59.924 ************************************ 00:08:59.924 06:48:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:00.185 * Looking for test storage... 00:09:00.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:00.185 06:48:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:00.185 06:48:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:00.185 06:48:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:00.185 06:48:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:00.185 06:48:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:00.185 06:48:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:00.185 06:48:04 -- scripts/common.sh@335 -- # IFS=.-: 00:09:00.185 06:48:04 -- scripts/common.sh@335 -- # read -ra ver1 00:09:00.185 06:48:04 -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.185 06:48:04 -- scripts/common.sh@336 -- # read -ra ver2 00:09:00.185 06:48:04 -- scripts/common.sh@337 -- # local 'op=<' 00:09:00.185 06:48:04 -- scripts/common.sh@339 -- # ver1_l=2 00:09:00.185 06:48:04 -- scripts/common.sh@340 -- # ver2_l=1 00:09:00.185 06:48:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:00.185 06:48:04 -- scripts/common.sh@343 -- # case "$op" in 00:09:00.185 06:48:04 -- scripts/common.sh@344 -- # : 1 00:09:00.185 06:48:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:00.185 06:48:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.185 06:48:04 -- scripts/common.sh@364 -- # decimal 1 00:09:00.185 06:48:04 -- scripts/common.sh@352 -- # local d=1 00:09:00.185 06:48:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.185 06:48:04 -- scripts/common.sh@354 -- # echo 1 00:09:00.185 06:48:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:00.185 06:48:04 -- scripts/common.sh@365 -- # decimal 2 00:09:00.185 06:48:04 -- scripts/common.sh@352 -- # local d=2 00:09:00.185 06:48:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.185 06:48:04 -- scripts/common.sh@354 -- # echo 2 00:09:00.185 06:48:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:00.185 06:48:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:00.185 06:48:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:00.185 06:48:04 -- scripts/common.sh@367 -- # return 0 00:09:00.185 06:48:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.185 --rc genhtml_branch_coverage=1 00:09:00.185 --rc genhtml_function_coverage=1 00:09:00.185 --rc genhtml_legend=1 00:09:00.185 --rc geninfo_all_blocks=1 00:09:00.185 --rc geninfo_unexecuted_blocks=1 00:09:00.185 00:09:00.185 ' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.185 --rc genhtml_branch_coverage=1 00:09:00.185 --rc genhtml_function_coverage=1 00:09:00.185 --rc genhtml_legend=1 00:09:00.185 --rc geninfo_all_blocks=1 00:09:00.185 --rc geninfo_unexecuted_blocks=1 00:09:00.185 00:09:00.185 ' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.185 --rc genhtml_branch_coverage=1 00:09:00.185 --rc genhtml_function_coverage=1 00:09:00.185 --rc genhtml_legend=1 00:09:00.185 --rc geninfo_all_blocks=1 00:09:00.185 --rc geninfo_unexecuted_blocks=1 00:09:00.185 00:09:00.185 ' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.185 --rc genhtml_branch_coverage=1 00:09:00.185 --rc genhtml_function_coverage=1 00:09:00.185 --rc genhtml_legend=1 00:09:00.185 --rc geninfo_all_blocks=1 00:09:00.185 --rc geninfo_unexecuted_blocks=1 00:09:00.185 00:09:00.185 ' 00:09:00.185 06:48:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.185 06:48:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.185 06:48:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.185 06:48:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.185 06:48:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.185 06:48:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.185 06:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.185 06:48:04 -- paths/export.sh@5 -- # export PATH 00:09:00.185 06:48:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.185 06:48:04 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.185 06:48:04 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.185 06:48:04 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.185 06:48:04 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.185 06:48:04 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:00.185 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.185 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.185 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.185 ************************************ 00:09:00.185 START TEST dd_invalid_arguments 00:09:00.185 ************************************ 00:09:00.185 06:48:04 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:09:00.185 06:48:04 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:00.185 06:48:04 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.185 06:48:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:00.185 06:48:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.185 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.185 06:48:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.185 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.185 06:48:04 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.185 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.185 06:48:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.185 06:48:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.185 06:48:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:00.185 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:00.185 options: 00:09:00.185 -c, --config JSON config file (default none) 00:09:00.185 --json JSON config file (default none) 00:09:00.185 --json-ignore-init-errors 00:09:00.185 don't exit on invalid config entry 00:09:00.185 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:00.185 -g, --single-file-segments 00:09:00.185 force creating just one hugetlbfs file 00:09:00.185 -h, --help show this usage 00:09:00.185 -i, --shm-id shared memory ID (optional) 00:09:00.185 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:00.185 --lcores lcore to CPU mapping list. The list is in the format: 00:09:00.185 [<,lcores[@CPUs]>...] 00:09:00.185 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:00.185 Within the group, '-' is used for range separator, 00:09:00.185 ',' is used for single number separator. 00:09:00.185 '( )' can be omitted for single element group, 00:09:00.185 '@' can be omitted if cpus and lcores have the same value 00:09:00.185 -n, --mem-channels channel number of memory channels used for DPDK 00:09:00.185 -p, --main-core main (primary) core for DPDK 00:09:00.185 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:00.185 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:00.185 --disable-cpumask-locks Disable CPU core lock files. 00:09:00.185 --silence-noticelog disable notice level logging to stderr 00:09:00.185 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:00.185 -u, --no-pci disable PCI access 00:09:00.185 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:00.185 --max-delay maximum reactor delay (in microseconds) 00:09:00.185 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:00.185 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:00.185 -R, --huge-unlink unlink huge files after initialization 00:09:00.185 -v, --version print SPDK version 00:09:00.185 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:00.185 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:00.185 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:00.186 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:00.186 Tracepoints vary in size and can use more than one trace entry. 
00:09:00.186 --rpcs-allowed comma-separated list of permitted RPCS 00:09:00.186 --env-context Opaque context for use of the env implementation 00:09:00.186 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:00.186 --no-huge run without using hugepages 00:09:00.186 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:00.186 -e, --tpoint-group [:] 00:09:00.186 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:00.186 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:00.186 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:00.186 [2024-12-13 06:48:04.663569] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:00.186 can be combined (e.g. thread,bdev:0x1). 00:09:00.186 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:00.186 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:00.186 [--------- DD Options ---------] 00:09:00.186 --if Input file. Must specify either --if or --ib. 00:09:00.186 --ib Input bdev. Must specify either --if or --ib. 00:09:00.186 --of Output file. Must specify either --of or --ob. 00:09:00.186 --ob Output bdev. Must specify either --of or --ob. 00:09:00.186 --iflag Input file flags. 00:09:00.186 --oflag Output file flags. 00:09:00.186 --bs I/O unit size (default: 4096) 00:09:00.186 --qd Queue depth (default: 2) 00:09:00.186 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:00.186 --skip Skip this many I/O units at start of input. (default: 0) 00:09:00.186 --seek Skip this many I/O units at start of output. (default: 0) 00:09:00.186 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:00.186 --sparse Enable hole skipping in input target 00:09:00.186 Available iflag and oflag values: 00:09:00.186 append - append mode 00:09:00.186 direct - use direct I/O for data 00:09:00.186 directory - fail unless a directory 00:09:00.186 dsync - use synchronized I/O for data 00:09:00.186 noatime - do not update access time 00:09:00.186 noctty - do not assign controlling terminal from file 00:09:00.186 nofollow - do not follow symlinks 00:09:00.186 nonblock - use non-blocking I/O 00:09:00.186 sync - use synchronized I/O for data and metadata 00:09:00.186 06:48:04 -- common/autotest_common.sh@653 -- # es=2 00:09:00.186 06:48:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.186 06:48:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.186 06:48:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.186 00:09:00.186 real 0m0.064s 00:09:00.186 user 0m0.038s 00:09:00.186 sys 0m0.024s 00:09:00.186 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.186 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.186 ************************************ 00:09:00.186 END TEST dd_invalid_arguments 00:09:00.186 ************************************ 00:09:00.445 06:48:04 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:00.445 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.445 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.445 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.445 ************************************ 00:09:00.445 START TEST dd_double_input 00:09:00.445 ************************************ 00:09:00.445 06:48:04 -- common/autotest_common.sh@1114 -- # double_input 00:09:00.445 06:48:04 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.445 06:48:04 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.445 06:48:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.445 06:48:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.445 06:48:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.445 [2024-12-13 06:48:04.783675] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
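Each negative test in this trace follows the same shape: run spdk_dd under the NOT wrapper from common/autotest_common.sh and treat a clean non-zero exit as a pass. The es=2 / es=22 bookkeeping visible above, with its (( es > 128 )) and (( !es == 0 )) checks, reduces to roughly the following helper (a simplified sketch of the idea, not the exact SPDK function; $SPDK_BIN is a stand-in for the build output directory):

    NOT() {
        local es=0
        "$@" || es=$?              # run the command, capture its exit status
        (( es > 128 )) && return 1 # 128+N means killed by signal N: a crash, not a clean rejection
        (( es != 0 ))              # invert: the negative test passes only if the command failed
    }

    # Example from this trace: spdk_dd must refuse --if together with --ib
    # (it exits with 22, i.e. EINVAL, after printing the error above).
    NOT "$SPDK_BIN/spdk_dd" --if=test/dd/dd.dump0 --ib= --ob=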
00:09:00.445 06:48:04 -- common/autotest_common.sh@653 -- # es=22 00:09:00.445 06:48:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.445 06:48:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.445 ************************************ 00:09:00.445 END TEST dd_double_input 00:09:00.445 ************************************ 00:09:00.445 06:48:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.445 00:09:00.445 real 0m0.069s 00:09:00.445 user 0m0.044s 00:09:00.445 sys 0m0.022s 00:09:00.445 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.445 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.445 06:48:04 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:00.445 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.445 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.445 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.445 ************************************ 00:09:00.445 START TEST dd_double_output 00:09:00.445 ************************************ 00:09:00.445 06:48:04 -- common/autotest_common.sh@1114 -- # double_output 00:09:00.445 06:48:04 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.445 06:48:04 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.445 06:48:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.445 06:48:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.445 06:48:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.445 06:48:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.445 [2024-12-13 06:48:04.907437] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:09:00.445 06:48:04 -- common/autotest_common.sh@653 -- # es=22 00:09:00.445 06:48:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.445 06:48:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.445 06:48:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.445 00:09:00.445 real 0m0.066s 00:09:00.445 user 0m0.037s 00:09:00.445 sys 0m0.028s 00:09:00.445 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.445 ************************************ 00:09:00.445 END TEST dd_double_output 00:09:00.445 ************************************ 00:09:00.445 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 06:48:04 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:00.705 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.705 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.705 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 00:09:00.705 START TEST dd_no_input 00:09:00.705 ************************************ 00:09:00.705 06:48:04 -- common/autotest_common.sh@1114 -- # no_input 00:09:00.705 06:48:04 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.705 06:48:04 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.705 06:48:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.705 06:48:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.705 06:48:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.705 [2024-12-13 06:48:05.031414] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:00.705 06:48:05 -- common/autotest_common.sh@653 -- # es=22 00:09:00.705 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.705 06:48:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.705 06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.705 00:09:00.705 real 0m0.070s 00:09:00.705 user 0m0.050s 00:09:00.705 sys 0m0.019s 00:09:00.705 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.705 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 00:09:00.705 END TEST dd_no_input 00:09:00.705 ************************************ 00:09:00.705 06:48:05 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:00.705 06:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.705 06:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.705 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 
00:09:00.705 START TEST dd_no_output 00:09:00.705 ************************************ 00:09:00.705 06:48:05 -- common/autotest_common.sh@1114 -- # no_output 00:09:00.705 06:48:05 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.705 06:48:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.705 06:48:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.705 06:48:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.705 06:48:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.705 [2024-12-13 06:48:05.148333] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:00.705 06:48:05 -- common/autotest_common.sh@653 -- # es=22 00:09:00.705 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.705 06:48:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.705 ************************************ 00:09:00.705 END TEST dd_no_output 00:09:00.705 ************************************ 00:09:00.705 06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.705 00:09:00.705 real 0m0.065s 00:09:00.705 user 0m0.039s 00:09:00.705 sys 0m0.025s 00:09:00.705 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.705 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 06:48:05 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:00.705 06:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.705 06:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.705 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.705 ************************************ 00:09:00.705 START TEST dd_wrong_blocksize 00:09:00.705 ************************************ 00:09:00.705 06:48:05 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:09:00.705 06:48:05 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.705 06:48:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.705 06:48:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.705 06:48:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.705 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.705 06:48:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.966 06:48:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.966 06:48:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.966 [2024-12-13 06:48:05.267827] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:00.966 ************************************ 00:09:00.966 END TEST dd_wrong_blocksize 00:09:00.966 ************************************ 00:09:00.966 06:48:05 -- common/autotest_common.sh@653 -- # es=22 00:09:00.966 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.966 06:48:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.966 06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.966 00:09:00.966 real 0m0.065s 00:09:00.966 user 0m0.037s 00:09:00.966 sys 0m0.027s 00:09:00.966 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.966 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.966 06:48:05 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:00.966 06:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.966 06:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.966 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.966 ************************************ 00:09:00.966 START TEST dd_smaller_blocksize 00:09:00.966 ************************************ 00:09:00.966 06:48:05 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:09:00.966 06:48:05 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:00.966 06:48:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:00.966 06:48:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:00.966 06:48:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.966 06:48:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.966 06:48:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.966 06:48:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.966 06:48:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:00.966 06:48:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:00.966 [2024-12-13 06:48:05.390686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.966 [2024-12-13 06:48:05.390770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:09:01.226 [2024-12-13 06:48:05.528820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.226 [2024-12-13 06:48:05.569430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.226 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:01.226 [2024-12-13 06:48:05.617741] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:01.226 [2024-12-13 06:48:05.617774] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.226 [2024-12-13 06:48:05.679189] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:01.485 06:48:05 -- common/autotest_common.sh@653 -- # es=244 00:09:01.485 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.485 06:48:05 -- common/autotest_common.sh@662 -- # es=116 00:09:01.485 06:48:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:01.485 06:48:05 -- common/autotest_common.sh@670 -- # es=1 00:09:01.485 06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.485 00:09:01.485 real 0m0.411s 00:09:01.485 user 0m0.197s 00:09:01.485 sys 0m0.109s 00:09:01.485 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.485 ************************************ 00:09:01.485 END TEST dd_smaller_blocksize 00:09:01.485 ************************************ 00:09:01.485 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 06:48:05 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:01.485 06:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.485 06:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.485 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 ************************************ 00:09:01.485 START TEST dd_invalid_count 00:09:01.485 ************************************ 00:09:01.485 06:48:05 -- common/autotest_common.sh@1114 -- # invalid_count 00:09:01.485 06:48:05 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.485 06:48:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.485 06:48:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.485 06:48:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.485 06:48:05 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.485 06:48:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.485 06:48:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.485 [2024-12-13 06:48:05.857874] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:01.485 06:48:05 -- common/autotest_common.sh@653 -- # es=22 00:09:01.485 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.485 06:48:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.485 06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.485 00:09:01.485 real 0m0.068s 00:09:01.485 user 0m0.041s 00:09:01.485 sys 0m0.026s 00:09:01.485 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.485 ************************************ 00:09:01.485 END TEST dd_invalid_count 00:09:01.485 ************************************ 00:09:01.485 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 06:48:05 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:01.485 06:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.485 06:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.485 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 ************************************ 00:09:01.485 START TEST dd_invalid_oflag 00:09:01.485 ************************************ 00:09:01.485 06:48:05 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:09:01.485 06:48:05 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.485 06:48:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.485 06:48:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.485 06:48:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.485 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.485 06:48:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.486 06:48:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.486 06:48:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.486 06:48:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.486 06:48:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.486 [2024-12-13 06:48:05.978725] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:01.486 06:48:05 -- common/autotest_common.sh@653 -- # es=22 00:09:01.486 06:48:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.486 06:48:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.486 
06:48:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.486 00:09:01.486 real 0m0.069s 00:09:01.486 user 0m0.038s 00:09:01.486 sys 0m0.030s 00:09:01.486 06:48:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.486 ************************************ 00:09:01.486 END TEST dd_invalid_oflag 00:09:01.486 ************************************ 00:09:01.486 06:48:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 06:48:06 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:01.745 06:48:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.745 06:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.745 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 ************************************ 00:09:01.745 START TEST dd_invalid_iflag 00:09:01.745 ************************************ 00:09:01.745 06:48:06 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:09:01.745 06:48:06 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.745 06:48:06 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.745 06:48:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.745 06:48:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.745 06:48:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.745 [2024-12-13 06:48:06.104158] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:01.745 06:48:06 -- common/autotest_common.sh@653 -- # es=22 00:09:01.745 06:48:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.745 06:48:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.745 06:48:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.745 00:09:01.745 real 0m0.070s 00:09:01.745 user 0m0.037s 00:09:01.745 sys 0m0.032s 00:09:01.745 06:48:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.745 ************************************ 00:09:01.745 END TEST dd_invalid_iflag 00:09:01.745 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 ************************************ 00:09:01.745 06:48:06 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:01.745 06:48:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.745 06:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.745 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.745 ************************************ 00:09:01.745 START TEST dd_unknown_flag 00:09:01.745 ************************************ 00:09:01.745 06:48:06 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:09:01.745 06:48:06 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.745 06:48:06 -- common/autotest_common.sh@650 -- # local es=0 00:09:01.745 06:48:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.745 06:48:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.745 06:48:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.745 06:48:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.745 [2024-12-13 06:48:06.230811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:01.745 [2024-12-13 06:48:06.230922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71694 ] 00:09:02.004 [2024-12-13 06:48:06.372519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.004 [2024-12-13 06:48:06.414911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.004 [2024-12-13 06:48:06.468066] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:02.004 [2024-12-13 06:48:06.468159] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:02.004 [2024-12-13 06:48:06.468175] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:02.004 [2024-12-13 06:48:06.468188] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.262 [2024-12-13 06:48:06.536918] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:02.262 06:48:06 -- common/autotest_common.sh@653 -- # es=236 00:09:02.262 06:48:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.262 06:48:06 -- common/autotest_common.sh@662 -- # es=108 00:09:02.262 06:48:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:02.262 06:48:06 -- common/autotest_common.sh@670 -- # es=1 00:09:02.262 06:48:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.262 00:09:02.262 real 0m0.438s 00:09:02.262 user 0m0.223s 00:09:02.262 sys 0m0.110s 00:09:02.262 06:48:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.262 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:02.262 ************************************ 00:09:02.262 END 
TEST dd_unknown_flag 00:09:02.262 ************************************ 00:09:02.262 06:48:06 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:02.262 06:48:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.262 06:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.262 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:02.262 ************************************ 00:09:02.262 START TEST dd_invalid_json 00:09:02.262 ************************************ 00:09:02.262 06:48:06 -- common/autotest_common.sh@1114 -- # invalid_json 00:09:02.263 06:48:06 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.263 06:48:06 -- dd/negative_dd.sh@95 -- # : 00:09:02.263 06:48:06 -- common/autotest_common.sh@650 -- # local es=0 00:09:02.263 06:48:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.263 06:48:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.263 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.263 06:48:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.263 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.263 06:48:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.263 06:48:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.263 06:48:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.263 06:48:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:02.263 06:48:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.263 [2024-12-13 06:48:06.719318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
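The negative dd tests in this stretch (dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag, dd_invalid_json) all funnel through the same NOT helper from autotest_common.sh: run spdk_dd with a bad flag combination, capture the exit status into es, and pass the test only if the tool failed. A minimal sketch of that pattern, simplified from the xtrace above (the real helper also maps a few expected codes, which is the es=236 -> es=108 -> es=1 sequence visible in the trace; $SPDK_DD stands in for the logged binary path):

    NOT() {
        # Run a command that is expected to fail; succeed only if it does.
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            es=$(( es - 128 ))   # strip the 128+signal bias (236 -> 108 above)
        fi
        case "$es" in
            0) return 1 ;;       # unexpected success: the negative test fails
            *) return 0 ;;       # failed as required
        esac
    }

    NOT "$SPDK_DD" --ib= --ob= --iflag=0   # e.g. the dd_invalid_iflag case above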
00:09:02.263 [2024-12-13 06:48:06.719457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:09:02.521 [2024-12-13 06:48:06.860218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.521 [2024-12-13 06:48:06.901398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.521 [2024-12-13 06:48:06.901527] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:02.521 [2024-12-13 06:48:06.901552] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.521 [2024-12-13 06:48:06.901598] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:02.521 06:48:06 -- common/autotest_common.sh@653 -- # es=234 00:09:02.521 06:48:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.521 06:48:06 -- common/autotest_common.sh@662 -- # es=106 00:09:02.521 06:48:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:02.521 06:48:06 -- common/autotest_common.sh@670 -- # es=1 00:09:02.521 06:48:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.521 00:09:02.521 real 0m0.317s 00:09:02.521 user 0m0.158s 00:09:02.521 sys 0m0.058s 00:09:02.521 06:48:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.521 06:48:06 -- common/autotest_common.sh@10 -- # set +x 00:09:02.521 ************************************ 00:09:02.521 END TEST dd_invalid_json 00:09:02.521 ************************************ 00:09:02.521 00:09:02.521 real 0m2.618s 00:09:02.521 user 0m1.282s 00:09:02.521 sys 0m0.954s 00:09:02.521 06:48:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.521 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.521 ************************************ 00:09:02.521 END TEST spdk_dd_negative 00:09:02.521 ************************************ 00:09:02.780 00:09:02.780 real 1m0.967s 00:09:02.780 user 0m36.781s 00:09:02.780 sys 0m15.079s 00:09:02.780 06:48:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.780 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.780 ************************************ 00:09:02.780 END TEST spdk_dd 00:09:02.780 ************************************ 00:09:02.780 06:48:07 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@255 -- # timing_exit lib 00:09:02.780 06:48:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.780 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.780 06:48:07 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:09:02.780 06:48:07 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:09:02.780 06:48:07 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.780 06:48:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.780 06:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.780 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.780 ************************************ 00:09:02.780 START TEST 
nvmf_tcp 00:09:02.780 ************************************ 00:09:02.780 06:48:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.780 * Looking for test storage... 00:09:02.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:02.780 06:48:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:02.780 06:48:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:02.780 06:48:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.039 06:48:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.039 06:48:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.039 06:48:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.039 06:48:07 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.039 06:48:07 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.039 06:48:07 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.039 06:48:07 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.039 06:48:07 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.039 06:48:07 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.039 06:48:07 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.039 06:48:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.039 06:48:07 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.039 06:48:07 -- scripts/common.sh@344 -- # : 1 00:09:03.039 06:48:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.039 06:48:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.039 06:48:07 -- scripts/common.sh@364 -- # decimal 1 00:09:03.039 06:48:07 -- scripts/common.sh@352 -- # local d=1 00:09:03.039 06:48:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.039 06:48:07 -- scripts/common.sh@354 -- # echo 1 00:09:03.039 06:48:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.039 06:48:07 -- scripts/common.sh@365 -- # decimal 2 00:09:03.039 06:48:07 -- scripts/common.sh@352 -- # local d=2 00:09:03.039 06:48:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.039 06:48:07 -- scripts/common.sh@354 -- # echo 2 00:09:03.039 06:48:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.039 06:48:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.039 06:48:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.039 06:48:07 -- scripts/common.sh@367 -- # return 0 00:09:03.039 06:48:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.039 --rc genhtml_branch_coverage=1 00:09:03.039 --rc genhtml_function_coverage=1 00:09:03.039 --rc genhtml_legend=1 00:09:03.039 --rc geninfo_all_blocks=1 00:09:03.039 --rc geninfo_unexecuted_blocks=1 00:09:03.039 00:09:03.039 ' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.039 --rc genhtml_branch_coverage=1 00:09:03.039 --rc genhtml_function_coverage=1 00:09:03.039 --rc genhtml_legend=1 00:09:03.039 --rc geninfo_all_blocks=1 00:09:03.039 --rc geninfo_unexecuted_blocks=1 00:09:03.039 00:09:03.039 ' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.039 --rc 
genhtml_branch_coverage=1 00:09:03.039 --rc genhtml_function_coverage=1 00:09:03.039 --rc genhtml_legend=1 00:09:03.039 --rc geninfo_all_blocks=1 00:09:03.039 --rc geninfo_unexecuted_blocks=1 00:09:03.039 00:09:03.039 ' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.039 --rc genhtml_branch_coverage=1 00:09:03.039 --rc genhtml_function_coverage=1 00:09:03.039 --rc genhtml_legend=1 00:09:03.039 --rc geninfo_all_blocks=1 00:09:03.039 --rc geninfo_unexecuted_blocks=1 00:09:03.039 00:09:03.039 ' 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.039 06:48:07 -- nvmf/common.sh@7 -- # uname -s 00:09:03.039 06:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.039 06:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.039 06:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.039 06:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.039 06:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.039 06:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.039 06:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.039 06:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.039 06:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.039 06:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.039 06:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:03.039 06:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:03.039 06:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.039 06:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.039 06:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.039 06:48:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.039 06:48:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.039 06:48:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.039 06:48:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.039 06:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.039 06:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.039 06:48:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.039 06:48:07 -- paths/export.sh@5 -- # export PATH 00:09:03.039 06:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.039 06:48:07 -- nvmf/common.sh@46 -- # : 0 00:09:03.039 06:48:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:03.039 06:48:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:03.039 06:48:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:03.039 06:48:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.039 06:48:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.039 06:48:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:03.039 06:48:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:03.039 06:48:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:03.039 06:48:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.039 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:03.039 06:48:07 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.039 06:48:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:03.039 06:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.039 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:09:03.039 ************************************ 00:09:03.039 START TEST nvmf_host_management 00:09:03.039 ************************************ 00:09:03.039 06:48:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.039 * Looking for test storage... 
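Before host_management.sh proper runs, nvmf/common.sh has populated the NVMe-oF environment seen in the trace. A condensed recap, with values copied from the log (the NVME_HOSTID derivation from the generated NQN is an assumption; only the resulting value appears in the trace):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:657f0c9c-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 657f0c9c-3891-4064-9841-3d87a573b6e7
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NET_TYPE=virt                      # selects the veth/netns fabric built below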
00:09:03.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.040 06:48:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:03.040 06:48:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:03.040 06:48:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.040 06:48:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.040 06:48:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.040 06:48:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.040 06:48:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.040 06:48:07 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.040 06:48:07 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.040 06:48:07 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.040 06:48:07 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.040 06:48:07 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.040 06:48:07 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.040 06:48:07 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.040 06:48:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.040 06:48:07 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.040 06:48:07 -- scripts/common.sh@344 -- # : 1 00:09:03.040 06:48:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.040 06:48:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.040 06:48:07 -- scripts/common.sh@364 -- # decimal 1 00:09:03.040 06:48:07 -- scripts/common.sh@352 -- # local d=1 00:09:03.040 06:48:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.040 06:48:07 -- scripts/common.sh@354 -- # echo 1 00:09:03.298 06:48:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.298 06:48:07 -- scripts/common.sh@365 -- # decimal 2 00:09:03.298 06:48:07 -- scripts/common.sh@352 -- # local d=2 00:09:03.298 06:48:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.298 06:48:07 -- scripts/common.sh@354 -- # echo 2 00:09:03.298 06:48:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.298 06:48:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.298 06:48:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.298 06:48:07 -- scripts/common.sh@367 -- # return 0 00:09:03.298 06:48:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.298 06:48:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.298 --rc genhtml_branch_coverage=1 00:09:03.298 --rc genhtml_function_coverage=1 00:09:03.298 --rc genhtml_legend=1 00:09:03.298 --rc geninfo_all_blocks=1 00:09:03.298 --rc geninfo_unexecuted_blocks=1 00:09:03.298 00:09:03.298 ' 00:09:03.298 06:48:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.298 --rc genhtml_branch_coverage=1 00:09:03.298 --rc genhtml_function_coverage=1 00:09:03.298 --rc genhtml_legend=1 00:09:03.298 --rc geninfo_all_blocks=1 00:09:03.298 --rc geninfo_unexecuted_blocks=1 00:09:03.298 00:09:03.298 ' 00:09:03.298 06:48:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.298 --rc genhtml_branch_coverage=1 00:09:03.298 --rc genhtml_function_coverage=1 00:09:03.298 --rc genhtml_legend=1 00:09:03.298 --rc geninfo_all_blocks=1 00:09:03.298 --rc geninfo_unexecuted_blocks=1 00:09:03.298 00:09:03.298 ' 00:09:03.298 
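The lcov option block being exported here is gated by the lt/cmp_versions helpers from scripts/common.sh: lt 1.15 2 succeeds, so the old-lcov --rc options are kept. A simplified sketch of the comparison the trace walks through, splitting each version on ., - and : and comparing component-wise (the logged decimal helper that sanitizes each component is folded away):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # '<' holds: 1 < 2
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions equal, so strict '<' is false
    }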
06:48:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.298 --rc genhtml_branch_coverage=1 00:09:03.298 --rc genhtml_function_coverage=1 00:09:03.298 --rc genhtml_legend=1 00:09:03.298 --rc geninfo_all_blocks=1 00:09:03.298 --rc geninfo_unexecuted_blocks=1 00:09:03.298 00:09:03.298 ' 00:09:03.298 06:48:07 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.298 06:48:07 -- nvmf/common.sh@7 -- # uname -s 00:09:03.298 06:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.298 06:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.298 06:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.298 06:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.298 06:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.298 06:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.298 06:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.298 06:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.298 06:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.298 06:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.298 06:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:03.298 06:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:03.298 06:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.298 06:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.298 06:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.298 06:48:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.298 06:48:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.298 06:48:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.298 06:48:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.299 06:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.299 06:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.299 06:48:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.299 06:48:07 -- paths/export.sh@5 -- # export PATH 00:09:03.299 06:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.299 06:48:07 -- nvmf/common.sh@46 -- # : 0 00:09:03.299 06:48:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:03.299 06:48:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:03.299 06:48:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:03.299 06:48:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.299 06:48:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.299 06:48:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:03.299 06:48:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:03.299 06:48:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:03.299 06:48:07 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.299 06:48:07 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.299 06:48:07 -- target/host_management.sh@104 -- # nvmftestinit 00:09:03.299 06:48:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:03.299 06:48:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.299 06:48:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:03.299 06:48:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:03.299 06:48:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:03.299 06:48:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.299 06:48:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.299 06:48:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.299 06:48:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:03.299 06:48:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:03.299 06:48:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:03.299 06:48:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:03.299 06:48:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:03.299 06:48:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:03.299 06:48:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.299 06:48:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.299 06:48:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:03.299 06:48:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:03.299 06:48:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.299 06:48:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.299 06:48:07 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.299 06:48:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.299 06:48:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.299 06:48:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.299 06:48:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.299 06:48:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.299 06:48:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:03.299 Cannot find device "nvmf_init_br" 00:09:03.299 06:48:07 -- nvmf/common.sh@153 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:03.299 Cannot find device "nvmf_tgt_br" 00:09:03.299 06:48:07 -- nvmf/common.sh@154 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.299 Cannot find device "nvmf_tgt_br2" 00:09:03.299 06:48:07 -- nvmf/common.sh@155 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:03.299 Cannot find device "nvmf_init_br" 00:09:03.299 06:48:07 -- nvmf/common.sh@156 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:03.299 Cannot find device "nvmf_tgt_br" 00:09:03.299 06:48:07 -- nvmf/common.sh@157 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:03.299 Cannot find device "nvmf_tgt_br2" 00:09:03.299 06:48:07 -- nvmf/common.sh@158 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:03.299 Cannot find device "nvmf_br" 00:09:03.299 06:48:07 -- nvmf/common.sh@159 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:03.299 Cannot find device "nvmf_init_if" 00:09:03.299 06:48:07 -- nvmf/common.sh@160 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.299 06:48:07 -- nvmf/common.sh@161 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.299 06:48:07 -- nvmf/common.sh@162 -- # true 00:09:03.299 06:48:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.299 06:48:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.299 06:48:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.299 06:48:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.299 06:48:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.299 06:48:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.299 06:48:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.299 06:48:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.299 06:48:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.299 06:48:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:03.299 06:48:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:03.299 06:48:07 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:03.299 06:48:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:03.299 06:48:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.299 06:48:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.299 06:48:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.299 06:48:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:03.558 06:48:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:03.558 06:48:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.558 06:48:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.558 06:48:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.558 06:48:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.558 06:48:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.558 06:48:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:03.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:09:03.558 00:09:03.558 --- 10.0.0.2 ping statistics --- 00:09:03.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.558 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:03.558 06:48:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:03.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:03.558 00:09:03.558 --- 10.0.0.3 ping statistics --- 00:09:03.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.558 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:03.558 06:48:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:03.558 00:09:03.558 --- 10.0.0.1 ping statistics --- 00:09:03.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.558 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:03.558 06:48:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.558 06:48:07 -- nvmf/common.sh@421 -- # return 0 00:09:03.558 06:48:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:03.558 06:48:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.558 06:48:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:03.558 06:48:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:03.558 06:48:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.558 06:48:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:03.558 06:48:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:03.558 06:48:08 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:03.559 06:48:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.559 06:48:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.559 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.559 ************************************ 00:09:03.559 START TEST nvmf_host_management 00:09:03.559 ************************************ 00:09:03.559 06:48:08 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:09:03.559 06:48:08 -- target/host_management.sh@69 -- # starttarget 00:09:03.559 06:48:08 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:03.559 06:48:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:03.559 06:48:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.559 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.559 06:48:08 -- nvmf/common.sh@469 -- # nvmfpid=71999 00:09:03.559 06:48:08 -- nvmf/common.sh@470 -- # waitforlisten 71999 00:09:03.559 06:48:08 -- common/autotest_common.sh@829 -- # '[' -z 71999 ']' 00:09:03.559 06:48:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.559 06:48:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.559 06:48:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.559 06:48:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.559 06:48:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:03.559 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.559 [2024-12-13 06:48:08.072643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.559 [2024-12-13 06:48:08.072731] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.816 [2024-12-13 06:48:08.214045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.816 [2024-12-13 06:48:08.255678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.816 [2024-12-13 06:48:08.255870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
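With the three pings above confirming connectivity and nvme-tcp loaded, nvmf_tgt (pid 71999) is started inside the target namespace. Condensed, the fabric that nvmf_veth_init assembled in the preceding lines looks like this (interface names and addresses exactly as logged):

    # nvmf_br (bridge, default netns)
    # |- nvmf_init_br -- veth -- nvmf_init_if  10.0.0.1/24  (default netns: initiator side)
    # |- nvmf_tgt_br  -- veth -- nvmf_tgt_if   10.0.0.2/24  (inside nvmf_tgt_ns_spdk)
    # |- nvmf_tgt_br2 -- veth -- nvmf_tgt_if2  10.0.0.3/24  (inside nvmf_tgt_ns_spdk)
    #
    # plus an ACCEPT rule for the target port:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT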
00:09:03.817 [2024-12-13 06:48:08.255887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.817 [2024-12-13 06:48:08.255898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.817 [2024-12-13 06:48:08.256304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.817 [2024-12-13 06:48:08.256686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.817 [2024-12-13 06:48:08.256808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.817 [2024-12-13 06:48:08.256821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.753 06:48:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.753 06:48:09 -- common/autotest_common.sh@862 -- # return 0 00:09:04.753 06:48:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:04.753 06:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 06:48:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.753 06:48:09 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.753 06:48:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 [2024-12-13 06:48:09.124990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.753 06:48:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.753 06:48:09 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:04.753 06:48:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 06:48:09 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:04.753 06:48:09 -- target/host_management.sh@23 -- # cat 00:09:04.753 06:48:09 -- target/host_management.sh@30 -- # rpc_cmd 00:09:04.753 06:48:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 Malloc0 00:09:04.753 [2024-12-13 06:48:09.203496] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.753 06:48:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.753 06:48:09 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:04.753 06:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 06:48:09 -- target/host_management.sh@73 -- # perfpid=72053 00:09:04.753 06:48:09 -- target/host_management.sh@74 -- # waitforlisten 72053 /var/tmp/bdevperf.sock 00:09:04.753 06:48:09 -- common/autotest_common.sh@829 -- # '[' -z 72053 ']' 00:09:04.753 06:48:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.753 06:48:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.753 06:48:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
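The bdevperf initiator (perfpid 72053) is launched with its own RPC socket so it can coexist with nvmf_tgt, and its JSON config arrives over an anonymous fd, which is why the command line in the trace shows --json /dev/fd/63. Reconstructed from the logged invocation (the process substitution is inferred; only the resulting fd path appears in the log):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64: queue depth; -o 65536: 64 KiB I/O size; -w verify: read-back
    # verification workload; -t 10: run for 10 seconds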
00:09:04.753 06:48:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.753 06:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 06:48:09 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:04.753 06:48:09 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:04.753 06:48:09 -- nvmf/common.sh@520 -- # config=() 00:09:04.753 06:48:09 -- nvmf/common.sh@520 -- # local subsystem config 00:09:04.753 06:48:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:04.753 06:48:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:04.753 { 00:09:04.753 "params": { 00:09:04.753 "name": "Nvme$subsystem", 00:09:04.753 "trtype": "$TEST_TRANSPORT", 00:09:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.753 "adrfam": "ipv4", 00:09:04.753 "trsvcid": "$NVMF_PORT", 00:09:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.754 "hdgst": ${hdgst:-false}, 00:09:04.754 "ddgst": ${ddgst:-false} 00:09:04.754 }, 00:09:04.754 "method": "bdev_nvme_attach_controller" 00:09:04.754 } 00:09:04.754 EOF 00:09:04.754 )") 00:09:04.754 06:48:09 -- nvmf/common.sh@542 -- # cat 00:09:04.754 06:48:09 -- nvmf/common.sh@544 -- # jq . 00:09:04.754 06:48:09 -- nvmf/common.sh@545 -- # IFS=, 00:09:04.754 06:48:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:04.754 "params": { 00:09:04.754 "name": "Nvme0", 00:09:04.754 "trtype": "tcp", 00:09:04.754 "traddr": "10.0.0.2", 00:09:04.754 "adrfam": "ipv4", 00:09:04.754 "trsvcid": "4420", 00:09:04.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:04.754 "hdgst": false, 00:09:04.754 "ddgst": false 00:09:04.754 }, 00:09:04.754 "method": "bdev_nvme_attach_controller" 00:09:04.754 }' 00:09:05.013 [2024-12-13 06:48:09.297923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:05.013 [2024-12-13 06:48:09.298013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72053 ] 00:09:05.013 [2024-12-13 06:48:09.439735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.013 [2024-12-13 06:48:09.479379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.271 Running I/O for 10 seconds... 
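The next trace lines exercise host_management.sh's waitforio helper: poll bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 has completed at least 100 reads (read_io_count=2041 on the first probe below), giving up after 10 attempts. A sketch matching the logged flow (the inter-poll delay is an assumption; the real helper's pacing may differ):

    waitforio() {
        local sock=$1 bdev=$2 ret=1 i count
        [[ -z $sock || -z $bdev ]] && return 1
        for (( i = 10; i != 0; i-- )); do
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [[ $count -ge 100 ]]; then
                ret=0    # enough I/O observed; the test can proceed to teardown
                break
            fi
            sleep 0.25   # assumption: delay between polls not visible in the trace
        done
        return $ret
    }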
00:09:05.839 06:48:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.839 06:48:10 -- common/autotest_common.sh@862 -- # return 0 00:09:05.839 06:48:10 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:05.840 06:48:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.840 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:09:05.840 06:48:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.840 06:48:10 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.840 06:48:10 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:05.840 06:48:10 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:05.840 06:48:10 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:05.840 06:48:10 -- target/host_management.sh@52 -- # local ret=1 00:09:05.840 06:48:10 -- target/host_management.sh@53 -- # local i 00:09:05.840 06:48:10 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:05.840 06:48:10 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:05.840 06:48:10 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:05.840 06:48:10 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:05.840 06:48:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.840 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.100 06:48:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.100 06:48:10 -- target/host_management.sh@55 -- # read_io_count=2041 00:09:06.100 06:48:10 -- target/host_management.sh@58 -- # '[' 2041 -ge 100 ']' 00:09:06.100 06:48:10 -- target/host_management.sh@59 -- # ret=0 00:09:06.100 06:48:10 -- target/host_management.sh@60 -- # break 00:09:06.100 06:48:10 -- target/host_management.sh@64 -- # return 0 00:09:06.100 06:48:10 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:06.100 06:48:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.100 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.100 [2024-12-13 06:48:10.392059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.100 [2024-12-13 06:48:10.392110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.100 [2024-12-13 06:48:10.392136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.100 [2024-12-13 06:48:10.392156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.100 [2024-12-13 06:48:10.392175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebada0 is same with the state(5) to be set 00:09:06.100 [2024-12-13 06:48:10.392448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.100 [2024-12-13 06:48:10.392699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.100 [2024-12-13 06:48:10.392708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.101 [2024-12-13 06:48:10.392851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.101 [2024-12-13 06:48:10.392864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:23 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.392986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.392995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 06:48:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.101 [2024-12-13 06:48:10.393145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-13 06:48:10.393397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 06:48:10 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
[2024-12-13 06:48:10.393408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.101 [2024-12-13 06:48:10.393532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.101 [2024-12-13 06:48:10.393541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 06:48:10 -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.102 [2024-12-13 06:48:10.393612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 [2024-12-13 06:48:10.393763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:06.102 06:48:10 -- common/autotest_common.sh@10 -- # set +x
00:09:06.102 [2024-12-13 06:48:10.393783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:06.102 [2024-12-13 06:48:10.393793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb9460 is same with the state(5) to be set
00:09:06.102 [2024-12-13 06:48:10.393839] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eb9460 was disconnected and freed. reset controller.
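The burst of paired print_command / ABORTED - SQ DELETION notices above is the expected signature of this test: the target is taken down while I/O is still queued, so every command outstanding on qpair 1 completes with ABORTED - SQ DELETION (00/08) when the submission queue is deleted. A minimal sketch for summarizing such a burst from a saved copy of this output (the autotest.log filename is an assumption):

#!/usr/bin/env bash
# Summarize the aborted I/O from a saved autotest log.
log=autotest.log

# Total completions aborted by SQ deletion.
grep -c 'ABORTED - SQ DELETION' "$log"

# Break the aborted commands down by opcode (READ vs. WRITE).
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" |
  awk '{print $NF}' | sort | uniq -c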
00:09:06.102 [2024-12-13 06:48:10.395006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:09:06.102 task offset: 18048 on job bdev=Nvme0n1 fails
00:09:06.102
00:09:06.102 Latency(us)
00:09:06.102 [2024-12-13T06:48:10.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:06.102 [2024-12-13T06:48:10.621Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:06.102 [2024-12-13T06:48:10.621Z] Job: Nvme0n1 ended in about 0.78 seconds with error
00:09:06.102 Verification LBA range: start 0x0 length 0x400
00:09:06.102 Nvme0n1 : 0.78 2793.53 174.60 82.35 0.00 21896.89 1757.56 27525.12
00:09:06.102 [2024-12-13T06:48:10.621Z] ===================================================================================================================
00:09:06.102 [2024-12-13T06:48:10.621Z] Total : 2793.53 174.60 82.35 0.00 21896.89 1757.56 27525.12
00:09:06.102 [2024-12-13 06:48:10.397089] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:06.102 [2024-12-13 06:48:10.397128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebada0 (9): Bad file descriptor
00:09:06.102 06:48:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.102 06:48:10 -- target/host_management.sh@87 -- # sleep 1
[2024-12-13 06:48:10.406007] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:09:07.038 06:48:11 -- target/host_management.sh@91 -- # kill -9 72053
/home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72053) - No such process
00:09:07.038 06:48:11 -- target/host_management.sh@91 -- # true
00:09:07.038 06:48:11 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:09:07.038 06:48:11 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:09:07.038 06:48:11 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:09:07.038 06:48:11 -- nvmf/common.sh@520 -- # config=()
00:09:07.038 06:48:11 -- nvmf/common.sh@520 -- # local subsystem config
00:09:07.038 06:48:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:09:07.038 06:48:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:09:07.038 {
00:09:07.038 "params": {
00:09:07.038 "name": "Nvme$subsystem",
00:09:07.038 "trtype": "$TEST_TRANSPORT",
00:09:07.038 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:07.038 "adrfam": "ipv4",
00:09:07.038 "trsvcid": "$NVMF_PORT",
00:09:07.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:07.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:07.038 "hdgst": ${hdgst:-false},
00:09:07.038 "ddgst": ${ddgst:-false}
00:09:07.038 },
00:09:07.038 "method": "bdev_nvme_attach_controller"
00:09:07.038 }
00:09:07.038 EOF
00:09:07.038 )")
00:09:07.038 06:48:11 -- nvmf/common.sh@542 -- # cat
00:09:07.038 06:48:11 -- nvmf/common.sh@544 -- # jq .
00:09:07.038 06:48:11 -- nvmf/common.sh@545 -- # IFS=,
00:09:07.038 06:48:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:09:07.038 "params": {
00:09:07.038 "name": "Nvme0",
00:09:07.038 "trtype": "tcp",
00:09:07.038 "traddr": "10.0.0.2",
00:09:07.038 "adrfam": "ipv4",
00:09:07.038 "trsvcid": "4420",
00:09:07.038 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:07.038 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:09:07.038 "hdgst": false,
00:09:07.038 "ddgst": false
00:09:07.038 },
00:09:07.038 "method": "bdev_nvme_attach_controller"
00:09:07.038 }'
[2024-12-13 06:48:11.458741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-12-13 06:48:11.458844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ]
[2024-12-13 06:48:11.599439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-13 06:48:11.636146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:09:08.674
00:09:08.674 Latency(us)
00:09:08.674 [2024-12-13T06:48:13.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:08.674 [2024-12-13T06:48:13.193Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:08.674 Verification LBA range: start 0x0 length 0x400
00:09:08.674 Nvme0n1 : 1.02 2911.01 181.94 0.00 0.00 21631.75 1295.83 28835.84
00:09:08.674 [2024-12-13T06:48:13.193Z] ===================================================================================================================
00:09:08.674 [2024-12-13T06:48:13.193Z] Total : 2911.01 181.94 0.00 0.00 21631.75 1295.83 28835.84
00:09:08.674 06:48:12 -- target/host_management.sh@101 -- # stoptarget
00:09:08.674 06:48:12 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:09:08.674 06:48:12 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:09:08.674 06:48:12 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:09:08.674 06:48:12 -- target/host_management.sh@40 -- # nvmftestfini
00:09:08.674 06:48:12 -- nvmf/common.sh@476 -- # nvmfcleanup
00:09:08.674 06:48:12 -- nvmf/common.sh@116 -- # sync
00:09:08.674 06:48:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:09:08.674 06:48:13 -- nvmf/common.sh@119 -- # set +e
00:09:08.674 06:48:13 -- nvmf/common.sh@120 -- # for i in {1..20}
00:09:08.674 06:48:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:09:08.674 rmmod nvme_tcp
00:09:08.674 rmmod nvme_fabrics
00:09:08.674 rmmod nvme_keyring
00:09:08.674 06:48:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:09:08.674 06:48:13 -- nvmf/common.sh@123 -- # set -e
00:09:08.674 06:48:13 -- nvmf/common.sh@124 -- # return 0
00:09:08.674 06:48:13 -- nvmf/common.sh@477 -- # '[' -n 71999 ']'
00:09:08.674 06:48:13 -- nvmf/common.sh@478 -- # killprocess 71999
00:09:08.674 06:48:13 -- common/autotest_common.sh@936 -- # '[' -z 71999 ']'
00:09:08.674 06:48:13 -- common/autotest_common.sh@940 -- # kill -0 71999
00:09:08.674 06:48:13 -- common/autotest_common.sh@941 -- # uname
00:09:08.674 06:48:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:08.674 06:48:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71999
06:48:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:09:08.674 06:48:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:09:08.674 killing process with pid 71999
06:48:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71999'
00:09:08.674 06:48:13 -- common/autotest_common.sh@955 -- # kill 71999
00:09:08.674 06:48:13 -- common/autotest_common.sh@960 -- # wait 71999
00:09:08.934 [2024-12-13 06:48:13.242484] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:09:08.934 06:48:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:09:08.934 06:48:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:09:08.934 06:48:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:09:08.934 06:48:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:08.934 06:48:13 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:09:08.934 06:48:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:08.934 06:48:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:08.934 06:48:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:08.934 06:48:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:09:08.934
00:09:08.934 real 0m5.285s
00:09:08.934 user 0m22.459s
00:09:08.934 sys 0m1.237s
00:09:08.934 06:48:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.934 06:48:13 -- common/autotest_common.sh@10 -- # set +x
00:09:08.934 ************************************
00:09:08.934 END TEST nvmf_host_management
00:09:08.934 ************************************
00:09:08.934 06:48:13 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:09:08.934
00:09:08.934 real 0m5.958s
00:09:08.934 user 0m22.669s
00:09:08.934 sys 0m1.486s
00:09:08.934 06:48:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.934 06:48:13 -- common/autotest_common.sh@10 -- # set +x
00:09:08.934 ************************************
00:09:08.934 END TEST nvmf_host_management
00:09:08.934 ************************************
00:09:08.934 06:48:13 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:08.934 06:48:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:08.934 06:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.934 06:48:13 -- common/autotest_common.sh@10 -- # set +x
00:09:08.934 ************************************
00:09:08.934 START TEST nvmf_lvol
00:09:08.934 ************************************
00:09:08.934 06:48:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:09.194 * Looking for test storage...
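The rendered JSON above shows the pattern host_management.sh uses to drive bdevperf: gen_nvmf_target_json prints a bdev_nvme_attach_controller fragment, and bdevperf reads the assembled config over a file descriptor via --json /dev/fd/62. A minimal sketch of that config-over-fd mechanic, with the values copied from this log; jq stands in for bdevperf here so the sketch runs anywhere, since the full subsystems wrapper that gen_nvmf_target_json builds is not shown in this log:

#!/usr/bin/env bash
# Render the attach-controller fragment exactly as printed above.
gen_config() {
  printf '%s\n' '{
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }'
}

# The test effectively runs: bdevperf --json /dev/fd/62 ... with the config on fd 62.
# Process substitution gives the consumer a /dev/fd path the same way:
jq . <(gen_config)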
00:09:09.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.194 06:48:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:09.194 06:48:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:09.194 06:48:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:09.194 06:48:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:09.194 06:48:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:09.194 06:48:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:09.194 06:48:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:09.194 06:48:13 -- scripts/common.sh@335 -- # IFS=.-: 00:09:09.194 06:48:13 -- scripts/common.sh@335 -- # read -ra ver1 00:09:09.194 06:48:13 -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.194 06:48:13 -- scripts/common.sh@336 -- # read -ra ver2 00:09:09.194 06:48:13 -- scripts/common.sh@337 -- # local 'op=<' 00:09:09.194 06:48:13 -- scripts/common.sh@339 -- # ver1_l=2 00:09:09.194 06:48:13 -- scripts/common.sh@340 -- # ver2_l=1 00:09:09.194 06:48:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:09.194 06:48:13 -- scripts/common.sh@343 -- # case "$op" in 00:09:09.194 06:48:13 -- scripts/common.sh@344 -- # : 1 00:09:09.194 06:48:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:09.194 06:48:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.194 06:48:13 -- scripts/common.sh@364 -- # decimal 1 00:09:09.194 06:48:13 -- scripts/common.sh@352 -- # local d=1 00:09:09.194 06:48:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.194 06:48:13 -- scripts/common.sh@354 -- # echo 1 00:09:09.194 06:48:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:09.194 06:48:13 -- scripts/common.sh@365 -- # decimal 2 00:09:09.194 06:48:13 -- scripts/common.sh@352 -- # local d=2 00:09:09.194 06:48:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.194 06:48:13 -- scripts/common.sh@354 -- # echo 2 00:09:09.194 06:48:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:09.194 06:48:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:09.194 06:48:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:09.194 06:48:13 -- scripts/common.sh@367 -- # return 0 00:09:09.194 06:48:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.194 06:48:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.194 --rc genhtml_branch_coverage=1 00:09:09.194 --rc genhtml_function_coverage=1 00:09:09.194 --rc genhtml_legend=1 00:09:09.194 --rc geninfo_all_blocks=1 00:09:09.194 --rc geninfo_unexecuted_blocks=1 00:09:09.194 00:09:09.194 ' 00:09:09.194 06:48:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.194 --rc genhtml_branch_coverage=1 00:09:09.194 --rc genhtml_function_coverage=1 00:09:09.194 --rc genhtml_legend=1 00:09:09.194 --rc geninfo_all_blocks=1 00:09:09.194 --rc geninfo_unexecuted_blocks=1 00:09:09.194 00:09:09.194 ' 00:09:09.194 06:48:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.194 --rc genhtml_branch_coverage=1 00:09:09.194 --rc genhtml_function_coverage=1 00:09:09.194 --rc genhtml_legend=1 00:09:09.194 --rc geninfo_all_blocks=1 00:09:09.194 --rc geninfo_unexecuted_blocks=1 00:09:09.194 00:09:09.194 ' 00:09:09.194 
06:48:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.194 --rc genhtml_branch_coverage=1 00:09:09.194 --rc genhtml_function_coverage=1 00:09:09.194 --rc genhtml_legend=1 00:09:09.194 --rc geninfo_all_blocks=1 00:09:09.194 --rc geninfo_unexecuted_blocks=1 00:09:09.194 00:09:09.194 ' 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.194 06:48:13 -- nvmf/common.sh@7 -- # uname -s 00:09:09.194 06:48:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.194 06:48:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.194 06:48:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.194 06:48:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.194 06:48:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.194 06:48:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.194 06:48:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.194 06:48:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.194 06:48:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.194 06:48:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:09.194 06:48:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:09.194 06:48:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.194 06:48:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.194 06:48:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.194 06:48:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.194 06:48:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.194 06:48:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.194 06:48:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.194 06:48:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.194 06:48:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.194 06:48:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.194 06:48:13 -- paths/export.sh@5 -- # export PATH 00:09:09.194 06:48:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.194 06:48:13 -- nvmf/common.sh@46 -- # : 0 00:09:09.194 06:48:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:09.194 06:48:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:09.194 06:48:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:09.194 06:48:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.194 06:48:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.194 06:48:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:09.194 06:48:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:09.194 06:48:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.194 06:48:13 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:09.194 06:48:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:09.194 06:48:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.194 06:48:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:09.194 06:48:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:09.194 06:48:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:09.194 06:48:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.194 06:48:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.194 06:48:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.194 06:48:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:09.194 06:48:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:09.194 06:48:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.194 06:48:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.194 06:48:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.194 06:48:13 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:09.194 06:48:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.194 06:48:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.194 06:48:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.194 06:48:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.194 06:48:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.194 06:48:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.194 06:48:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.194 06:48:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.194 06:48:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:09.194 06:48:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:09.195 Cannot find device "nvmf_tgt_br" 00:09:09.195 06:48:13 -- nvmf/common.sh@154 -- # true 00:09:09.195 06:48:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.195 Cannot find device "nvmf_tgt_br2" 00:09:09.195 06:48:13 -- nvmf/common.sh@155 -- # true 00:09:09.195 06:48:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:09.195 06:48:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:09.195 Cannot find device "nvmf_tgt_br" 00:09:09.195 06:48:13 -- nvmf/common.sh@157 -- # true 00:09:09.195 06:48:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:09.195 Cannot find device "nvmf_tgt_br2" 00:09:09.195 06:48:13 -- nvmf/common.sh@158 -- # true 00:09:09.195 06:48:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:09.195 06:48:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:09.195 06:48:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.195 06:48:13 -- nvmf/common.sh@161 -- # true 00:09:09.195 06:48:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.454 06:48:13 -- nvmf/common.sh@162 -- # true 00:09:09.454 06:48:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.454 06:48:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.454 06:48:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.454 06:48:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.454 06:48:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.454 06:48:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.454 06:48:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.454 06:48:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:09.454 06:48:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:09.454 06:48:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:09.454 06:48:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:09.454 06:48:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:09.454 06:48:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:09.454 06:48:13 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:09:09.454 06:48:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:09.454 06:48:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:09.454 06:48:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:09:09.454 06:48:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:09:09.454 06:48:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:09:09.454 06:48:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:09.454 06:48:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:09.454 06:48:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:09.454 06:48:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:09.454 06:48:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:09:09.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:09.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:09:09.454
00:09:09.454 --- 10.0.0.2 ping statistics ---
00:09:09.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.454 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:09:09.454 06:48:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:09:09.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:09.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:09:09.454
00:09:09.454 --- 10.0.0.3 ping statistics ---
00:09:09.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.454 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:09:09.454 06:48:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:09.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:09.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:09:09.454
00:09:09.454 --- 10.0.0.1 ping statistics ---
00:09:09.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.454 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:09:09.454 06:48:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:09.454 06:48:13 -- nvmf/common.sh@421 -- # return 0
00:09:09.454 06:48:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:09:09.454 06:48:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:09.454 06:48:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:09:09.454 06:48:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:09:09.454 06:48:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:09.454 06:48:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:09:09.454 06:48:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:09:09.454 06:48:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:09:09.454 06:48:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:09:09.454 06:48:13 -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:09.454 06:48:13 -- common/autotest_common.sh@10 -- # set +x
00:09:09.454 06:48:13 -- nvmf/common.sh@469 -- # nvmfpid=72318
00:09:09.454 06:48:13 -- nvmf/common.sh@470 -- # waitforlisten 72318
00:09:09.454 06:48:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:09:09.454 06:48:13 -- common/autotest_common.sh@829 -- # '[' -z 72318 ']'
00:09:09.454 06:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.454 06:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:09.454 06:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.454 06:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:09.454 06:48:13 -- common/autotest_common.sh@10 -- # set +x
00:09:09.455 [2024-12-13 06:48:13.967796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-12-13 06:48:13.967896] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:09.713 [2024-12-13 06:48:14.111346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:09.713 [2024-12-13 06:48:14.151628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:09.713 [2024-12-13 06:48:14.151805] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:09.713 [2024-12-13 06:48:14.151822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:09.713 [2024-12-13 06:48:14.151833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
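With the target up, the nvmf_lvol test now builds its volume bottom-up over RPC; the trace that follows boils down to the sequence sketched here (same rpc.py commands as in the log, with the UUIDs the log captures into lvs= and lvol= represented by shell variables):

#!/usr/bin/env bash
# Condensed sketch of the RPC sequence the nvmf_lvol test runs next.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport
$rpc bdev_malloc_create 64 512                          # Malloc0
$rpc bdev_malloc_create 64 512                          # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore on the raid, prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # 20 MiB lvol, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420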
00:09:09.713 [2024-12-13 06:48:14.152468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:09.713 [2024-12-13 06:48:14.152538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:09.713 [2024-12-13 06:48:14.152547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.651 06:48:15 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:10.651 06:48:15 -- common/autotest_common.sh@862 -- # return 0
00:09:10.651 06:48:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:09:10.651 06:48:15 -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:10.651 06:48:15 -- common/autotest_common.sh@10 -- # set +x
00:09:10.651 06:48:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:10.651 06:48:15 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:10.911 [2024-12-13 06:48:15.320053] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:10.911 06:48:15 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:11.170 06:48:15 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:09:11.170 06:48:15 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:11.429 06:48:15 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:09:11.429 06:48:15 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:11.687 06:48:16 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:09:11.945 06:48:16 -- target/nvmf_lvol.sh@29 -- # lvs=9a68ed3a-ac69-4303-b968-0e561846e94a
00:09:11.945 06:48:16 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9a68ed3a-ac69-4303-b968-0e561846e94a lvol 20
00:09:12.204 06:48:16 -- target/nvmf_lvol.sh@32 -- # lvol=4a2c6270-0a9b-498d-b8c0-0aec5dddc385
00:09:12.204 06:48:16 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:12.462 06:48:16 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a2c6270-0a9b-498d-b8c0-0aec5dddc385
00:09:12.720 06:48:17 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:12.978 [2024-12-13 06:48:17.400849] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:12.978 06:48:17 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:13.237 06:48:17 -- target/nvmf_lvol.sh@42 -- # perf_pid=72399
00:09:13.237 06:48:17 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:09:13.237 06:48:17 -- target/nvmf_lvol.sh@44 -- # sleep 1
00:09:14.613 06:48:18 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4a2c6270-0a9b-498d-b8c0-0aec5dddc385 MY_SNAPSHOT
00:09:14.613 06:48:18 -- target/nvmf_lvol.sh@47 -- # snapshot=dcbce8dd-7ed7-4d08-958e-30dcb4849599
00:09:14.613 06:48:18 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4a2c6270-0a9b-498d-b8c0-0aec5dddc385 30
00:09:14.871 06:48:19 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone dcbce8dd-7ed7-4d08-958e-30dcb4849599 MY_CLONE
00:09:15.183 06:48:19 -- target/nvmf_lvol.sh@49 -- # clone=ee463776-72bb-4b90-9ff6-101a2580867f
00:09:15.183 06:48:19 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ee463776-72bb-4b90-9ff6-101a2580867f
00:09:15.765 06:48:20 -- target/nvmf_lvol.sh@53 -- # wait 72399
00:09:23.873 Initializing NVMe Controllers
00:09:23.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:23.873 Controller IO queue size 128, less than required.
00:09:23.874 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:23.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:23.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:23.874 Initialization complete. Launching workers.
00:09:23.874 ========================================================
00:09:23.874 Latency(us)
00:09:23.874 Device Information : IOPS MiB/s Average min max
00:09:23.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9244.35 36.11 13858.41 1476.75 60977.84
00:09:23.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9161.26 35.79 13972.79 991.03 55137.19
00:09:23.874 ========================================================
00:09:23.874 Total : 18405.61 71.90 13915.34 991.03 60977.84
00:09:23.874
00:09:23.874 06:48:27 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:24.132 06:48:28 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a2c6270-0a9b-498d-b8c0-0aec5dddc385
00:09:24.390 06:48:28 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a68ed3a-ac69-4303-b968-0e561846e94a
00:09:24.390 06:48:28 -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:24.390 06:48:28 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:24.390 06:48:28 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:24.390 06:48:28 -- nvmf/common.sh@476 -- # nvmfcleanup
00:09:24.390 06:48:28 -- nvmf/common.sh@116 -- # sync
00:09:24.390 06:48:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:09:24.390 06:48:28 -- nvmf/common.sh@119 -- # set +e
00:09:24.390 06:48:28 -- nvmf/common.sh@120 -- # for i in {1..20}
00:09:24.390 06:48:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:09:24.390 rmmod nvme_tcp
00:09:24.390 rmmod nvme_fabrics
00:09:24.390 rmmod nvme_keyring
00:09:24.390 06:48:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:09:24.390 06:48:28 -- nvmf/common.sh@123 -- # set -e
00:09:24.390 06:48:28 -- nvmf/common.sh@124 -- # return 0
00:09:24.390 06:48:28 -- nvmf/common.sh@477 -- # '[' -n 72318 ']'
00:09:24.390 06:48:28 -- nvmf/common.sh@478 -- # killprocess 72318
00:09:24.390 06:48:28 -- common/autotest_common.sh@936 -- # '[' -z 72318 ']'
00:09:24.390 06:48:28 -- common/autotest_common.sh@940 -- # kill -0 72318
00:09:24.390 06:48:28 -- common/autotest_common.sh@941 -- # uname
00:09:24.390 06:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:24.390 06:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72318
00:09:24.649 06:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:24.649 06:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:24.649 06:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72318'
killing process with pid 72318
00:09:24.649 06:48:28 -- common/autotest_common.sh@955 -- # kill 72318
00:09:24.649 06:48:28 -- common/autotest_common.sh@960 -- # wait 72318
00:09:24.649 06:48:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:09:24.649 06:48:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:09:24.649 06:48:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:09:24.649 06:48:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:24.649 06:48:29 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:09:24.649 06:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:24.649 06:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:24.649 06:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:24.649 06:48:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:09:24.649
00:09:24.649 real 0m15.759s
00:09:24.649 user 1m5.389s
00:09:24.649 sys 0m4.600s
00:09:24.649 06:48:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:24.649 06:48:29 -- common/autotest_common.sh@10 -- # set +x
00:09:24.649 ************************************
00:09:24.649 END TEST nvmf_lvol
00:09:24.649 ************************************
00:09:24.909 06:48:29 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:24.909 06:48:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:24.909 06:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:24.909 06:48:29 -- common/autotest_common.sh@10 -- # set +x
00:09:24.909 ************************************
00:09:24.909 START TEST nvmf_lvs_grow
00:09:24.909 ************************************
00:09:24.909 06:48:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:24.909 * Looking for test storage...
00:09:24.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.909 06:48:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.909 06:48:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.909 06:48:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:24.909 06:48:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:24.909 06:48:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:24.909 06:48:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:24.909 06:48:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:24.909 06:48:29 -- scripts/common.sh@335 -- # IFS=.-: 00:09:24.909 06:48:29 -- scripts/common.sh@335 -- # read -ra ver1 00:09:24.909 06:48:29 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.909 06:48:29 -- scripts/common.sh@336 -- # read -ra ver2 00:09:24.909 06:48:29 -- scripts/common.sh@337 -- # local 'op=<' 00:09:24.909 06:48:29 -- scripts/common.sh@339 -- # ver1_l=2 00:09:24.909 06:48:29 -- scripts/common.sh@340 -- # ver2_l=1 00:09:24.909 06:48:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:24.909 06:48:29 -- scripts/common.sh@343 -- # case "$op" in 00:09:24.909 06:48:29 -- scripts/common.sh@344 -- # : 1 00:09:24.909 06:48:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:24.909 06:48:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.909 06:48:29 -- scripts/common.sh@364 -- # decimal 1 00:09:24.909 06:48:29 -- scripts/common.sh@352 -- # local d=1 00:09:24.909 06:48:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.909 06:48:29 -- scripts/common.sh@354 -- # echo 1 00:09:24.909 06:48:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:24.909 06:48:29 -- scripts/common.sh@365 -- # decimal 2 00:09:24.909 06:48:29 -- scripts/common.sh@352 -- # local d=2 00:09:24.909 06:48:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.909 06:48:29 -- scripts/common.sh@354 -- # echo 2 00:09:24.909 06:48:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:24.909 06:48:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:24.909 06:48:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:24.909 06:48:29 -- scripts/common.sh@367 -- # return 0 00:09:24.909 06:48:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.909 06:48:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:24.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.909 --rc genhtml_branch_coverage=1 00:09:24.909 --rc genhtml_function_coverage=1 00:09:24.909 --rc genhtml_legend=1 00:09:24.909 --rc geninfo_all_blocks=1 00:09:24.909 --rc geninfo_unexecuted_blocks=1 00:09:24.909 00:09:24.909 ' 00:09:24.909 06:48:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:24.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.909 --rc genhtml_branch_coverage=1 00:09:24.909 --rc genhtml_function_coverage=1 00:09:24.909 --rc genhtml_legend=1 00:09:24.909 --rc geninfo_all_blocks=1 00:09:24.909 --rc geninfo_unexecuted_blocks=1 00:09:24.909 00:09:24.909 ' 00:09:24.909 06:48:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:24.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.909 --rc genhtml_branch_coverage=1 00:09:24.909 --rc genhtml_function_coverage=1 00:09:24.909 --rc genhtml_legend=1 00:09:24.909 --rc geninfo_all_blocks=1 00:09:24.909 --rc geninfo_unexecuted_blocks=1 00:09:24.909 00:09:24.909 ' 00:09:24.909 
06:48:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:24.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.909 --rc genhtml_branch_coverage=1 00:09:24.909 --rc genhtml_function_coverage=1 00:09:24.909 --rc genhtml_legend=1 00:09:24.909 --rc geninfo_all_blocks=1 00:09:24.909 --rc geninfo_unexecuted_blocks=1 00:09:24.909 00:09:24.909 ' 00:09:24.909 06:48:29 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.909 06:48:29 -- nvmf/common.sh@7 -- # uname -s 00:09:24.909 06:48:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.909 06:48:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.909 06:48:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.909 06:48:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.909 06:48:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.909 06:48:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.909 06:48:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.909 06:48:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.909 06:48:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.909 06:48:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:24.909 06:48:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:09:24.909 06:48:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.909 06:48:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.909 06:48:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.909 06:48:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.909 06:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.909 06:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.909 06:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.909 06:48:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.909 06:48:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.909 06:48:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.909 06:48:29 -- paths/export.sh@5 -- # export PATH 00:09:24.909 06:48:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.909 06:48:29 -- nvmf/common.sh@46 -- # : 0 00:09:24.909 06:48:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:24.909 06:48:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:24.909 06:48:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:24.909 06:48:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.909 06:48:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.909 06:48:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:24.909 06:48:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:24.909 06:48:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:24.909 06:48:29 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.909 06:48:29 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:24.909 06:48:29 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:24.909 06:48:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:24.909 06:48:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.909 06:48:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:24.909 06:48:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:24.909 06:48:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:24.909 06:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.909 06:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.909 06:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.909 06:48:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:24.909 06:48:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:24.909 06:48:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.909 06:48:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.909 06:48:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.909 06:48:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:24.909 06:48:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.909 06:48:29 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.909 06:48:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.909 06:48:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.909 06:48:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.909 06:48:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.909 06:48:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.909 06:48:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.909 06:48:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:25.169 06:48:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:25.169 Cannot find device "nvmf_tgt_br" 00:09:25.169 06:48:29 -- nvmf/common.sh@154 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.169 Cannot find device "nvmf_tgt_br2" 00:09:25.169 06:48:29 -- nvmf/common.sh@155 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:25.169 06:48:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:25.169 Cannot find device "nvmf_tgt_br" 00:09:25.169 06:48:29 -- nvmf/common.sh@157 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:25.169 Cannot find device "nvmf_tgt_br2" 00:09:25.169 06:48:29 -- nvmf/common.sh@158 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:25.169 06:48:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:25.169 06:48:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.169 06:48:29 -- nvmf/common.sh@161 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.169 06:48:29 -- nvmf/common.sh@162 -- # true 00:09:25.169 06:48:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.169 06:48:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.169 06:48:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.169 06:48:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.169 06:48:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.169 06:48:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.169 06:48:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.169 06:48:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:25.169 06:48:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:25.169 06:48:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:25.169 06:48:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:25.169 06:48:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:25.169 06:48:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:25.169 06:48:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.169 06:48:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
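[annotation] The "Cannot find device" and "Cannot open network namespace" messages above are the expected stale-state teardown on a fresh runner: nvmf_veth_init deletes any leftover bridge, namespace, and veth pairs before rebuilding them. Condensed from the trace, a minimal standalone sketch of the topology it then creates (interface names, namespace name, and /24 addressing exactly as logged):

  # target namespace plus three veth pairs; one leg of each target pair
  # is moved into the namespace, the *_br legs stay in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP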
00:09:25.169 06:48:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.169 06:48:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:25.427 06:48:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:25.427 06:48:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.427 06:48:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.427 06:48:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.427 06:48:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.427 06:48:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.427 06:48:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:25.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:25.427 00:09:25.427 --- 10.0.0.2 ping statistics --- 00:09:25.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.427 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:25.427 06:48:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:25.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:25.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:25.427 00:09:25.427 --- 10.0.0.3 ping statistics --- 00:09:25.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.427 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:25.427 06:48:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:25.427 00:09:25.427 --- 10.0.0.1 ping statistics --- 00:09:25.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.427 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:25.427 06:48:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.427 06:48:29 -- nvmf/common.sh@421 -- # return 0 00:09:25.427 06:48:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:25.427 06:48:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.427 06:48:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:25.427 06:48:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:25.427 06:48:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.427 06:48:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:25.427 06:48:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:25.427 06:48:29 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:25.427 06:48:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:25.427 06:48:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.427 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:09:25.427 06:48:29 -- nvmf/common.sh@469 -- # nvmfpid=72730 00:09:25.427 06:48:29 -- nvmf/common.sh@470 -- # waitforlisten 72730 00:09:25.427 06:48:29 -- common/autotest_common.sh@829 -- # '[' -z 72730 ']' 00:09:25.427 06:48:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:25.427 06:48:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.428 06:48:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
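[annotation] With the three *_br legs enslaved to the nvmf_br bridge, an iptables ACCEPT for TCP/4420 ingress on the initiator interface plus one for intra-bridge forwarding, a single ICMP echo per address proves the wiring before any NVMe/TCP traffic flows; the target binary is then launched inside the namespace (the ip netns exec prefix comes from the NVMF_TARGET_NS_CMD array prepended to NVMF_APP above). A hypothetical manual re-check of the same data path, with addresses and the namespace name taken from the log and the -W timeout an added convenience not used by the harness:

  for ip in 10.0.0.2 10.0.0.3; do
    ping -c 1 -W 1 "$ip" || echo "no path to $ip"          # root ns -> target ns
  done
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.1   # target ns -> root ns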
00:09:25.428 06:48:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.428 06:48:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.428 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:09:25.428 [2024-12-13 06:48:29.825770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.428 [2024-12-13 06:48:29.825855] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.686 [2024-12-13 06:48:29.961935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.686 [2024-12-13 06:48:29.994869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.686 [2024-12-13 06:48:29.995006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.686 [2024-12-13 06:48:29.995017] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.686 [2024-12-13 06:48:29.995024] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.686 [2024-12-13 06:48:29.995052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.686 06:48:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.686 06:48:30 -- common/autotest_common.sh@862 -- # return 0 00:09:25.686 06:48:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:25.686 06:48:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.686 06:48:30 -- common/autotest_common.sh@10 -- # set +x 00:09:25.686 06:48:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.686 06:48:30 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.944 [2024-12-13 06:48:30.403324] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:25.944 06:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.944 06:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.944 06:48:30 -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 ************************************ 00:09:25.944 START TEST lvs_grow_clean 00:09:25.944 ************************************ 00:09:25.944 06:48:30 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.944 06:48:30 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.543 06:48:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.543 06:48:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.543 06:48:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:26.543 06:48:30 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:26.543 06:48:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.802 06:48:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.802 06:48:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.802 06:48:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c lvol 150 00:09:27.061 06:48:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=747069c0-8e18-4461-a784-c6831107701a 00:09:27.061 06:48:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.061 06:48:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.320 [2024-12-13 06:48:31.720215] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:27.320 [2024-12-13 06:48:31.720287] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.320 true 00:09:27.320 06:48:31 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:27.320 06:48:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.579 06:48:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.579 06:48:31 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.838 06:48:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 747069c0-8e18-4461-a784-c6831107701a 00:09:28.097 06:48:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.356 [2024-12-13 06:48:32.776936] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.356 06:48:32 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.615 06:48:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72805 00:09:28.615 06:48:33 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.615 06:48:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.615 06:48:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72805 /var/tmp/bdevperf.sock 00:09:28.615 06:48:33 -- common/autotest_common.sh@829 -- # '[' -z 72805 ']' 00:09:28.615 
06:48:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.615 06:48:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.615 06:48:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.615 06:48:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.615 06:48:33 -- common/autotest_common.sh@10 -- # set +x 00:09:28.615 [2024-12-13 06:48:33.097003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.615 [2024-12-13 06:48:33.097091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:09:28.874 [2024-12-13 06:48:33.239553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.874 [2024-12-13 06:48:33.281079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.810 06:48:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.810 06:48:34 -- common/autotest_common.sh@862 -- # return 0 00:09:29.810 06:48:34 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.810 Nvme0n1 00:09:29.810 06:48:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:30.069 [ 00:09:30.069 { 00:09:30.069 "name": "Nvme0n1", 00:09:30.069 "aliases": [ 00:09:30.069 "747069c0-8e18-4461-a784-c6831107701a" 00:09:30.069 ], 00:09:30.069 "product_name": "NVMe disk", 00:09:30.069 "block_size": 4096, 00:09:30.069 "num_blocks": 38912, 00:09:30.069 "uuid": "747069c0-8e18-4461-a784-c6831107701a", 00:09:30.069 "assigned_rate_limits": { 00:09:30.069 "rw_ios_per_sec": 0, 00:09:30.069 "rw_mbytes_per_sec": 0, 00:09:30.069 "r_mbytes_per_sec": 0, 00:09:30.069 "w_mbytes_per_sec": 0 00:09:30.069 }, 00:09:30.069 "claimed": false, 00:09:30.069 "zoned": false, 00:09:30.069 "supported_io_types": { 00:09:30.069 "read": true, 00:09:30.069 "write": true, 00:09:30.069 "unmap": true, 00:09:30.069 "write_zeroes": true, 00:09:30.069 "flush": true, 00:09:30.069 "reset": true, 00:09:30.069 "compare": true, 00:09:30.069 "compare_and_write": true, 00:09:30.069 "abort": true, 00:09:30.069 "nvme_admin": true, 00:09:30.069 "nvme_io": true 00:09:30.069 }, 00:09:30.069 "driver_specific": { 00:09:30.069 "nvme": [ 00:09:30.069 { 00:09:30.069 "trid": { 00:09:30.069 "trtype": "TCP", 00:09:30.069 "adrfam": "IPv4", 00:09:30.069 "traddr": "10.0.0.2", 00:09:30.069 "trsvcid": "4420", 00:09:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:30.069 }, 00:09:30.069 "ctrlr_data": { 00:09:30.069 "cntlid": 1, 00:09:30.069 "vendor_id": "0x8086", 00:09:30.069 "model_number": "SPDK bdev Controller", 00:09:30.069 "serial_number": "SPDK0", 00:09:30.069 "firmware_revision": "24.01.1", 00:09:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.069 "oacs": { 00:09:30.069 "security": 0, 00:09:30.069 "format": 0, 00:09:30.069 "firmware": 0, 00:09:30.069 "ns_manage": 0 00:09:30.069 }, 00:09:30.069 "multi_ctrlr": true, 00:09:30.069 "ana_reporting": false 00:09:30.069 }, 00:09:30.069 "vs": { 00:09:30.069 
"nvme_version": "1.3" 00:09:30.069 }, 00:09:30.069 "ns_data": { 00:09:30.069 "id": 1, 00:09:30.069 "can_share": true 00:09:30.069 } 00:09:30.069 } 00:09:30.069 ], 00:09:30.069 "mp_policy": "active_passive" 00:09:30.069 } 00:09:30.069 } 00:09:30.069 ] 00:09:30.069 06:48:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72834 00:09:30.069 06:48:34 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.069 06:48:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:30.328 Running I/O for 10 seconds... 00:09:31.262 Latency(us) 00:09:31.262 [2024-12-13T06:48:35.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.262 [2024-12-13T06:48:35.781Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.262 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:31.262 [2024-12-13T06:48:35.781Z] =================================================================================================================== 00:09:31.262 [2024-12-13T06:48:35.781Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:31.262 00:09:32.198 06:48:36 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:32.198 [2024-12-13T06:48:36.717Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.198 Nvme0n1 : 2.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:32.198 [2024-12-13T06:48:36.717Z] =================================================================================================================== 00:09:32.198 [2024-12-13T06:48:36.717Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:32.198 00:09:32.457 true 00:09:32.457 06:48:36 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:32.457 06:48:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.716 06:48:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.716 06:48:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.716 06:48:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 72834 00:09:33.283 [2024-12-13T06:48:37.802Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.283 Nvme0n1 : 3.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:33.283 [2024-12-13T06:48:37.802Z] =================================================================================================================== 00:09:33.283 [2024-12-13T06:48:37.802Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:33.283 00:09:34.221 [2024-12-13T06:48:38.740Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.221 Nvme0n1 : 4.00 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:09:34.221 [2024-12-13T06:48:38.740Z] =================================================================================================================== 00:09:34.221 [2024-12-13T06:48:38.740Z] Total : 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:09:34.221 00:09:35.598 [2024-12-13T06:48:40.117Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.598 Nvme0n1 : 5.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:35.598 [2024-12-13T06:48:40.117Z] =================================================================================================================== 00:09:35.598 [2024-12-13T06:48:40.117Z] Total : 6731.00 26.29 
0.00 0.00 0.00 0.00 0.00 00:09:35.598 00:09:36.535 [2024-12-13T06:48:41.054Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.535 Nvme0n1 : 6.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:36.535 [2024-12-13T06:48:41.054Z] =================================================================================================================== 00:09:36.535 [2024-12-13T06:48:41.054Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:36.535 00:09:37.472 [2024-12-13T06:48:41.992Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.473 Nvme0n1 : 7.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:37.473 [2024-12-13T06:48:41.992Z] =================================================================================================================== 00:09:37.473 [2024-12-13T06:48:41.992Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:37.473 00:09:38.424 [2024-12-13T06:48:42.943Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.424 Nvme0n1 : 8.00 6571.88 25.67 0.00 0.00 0.00 0.00 0.00 00:09:38.424 [2024-12-13T06:48:42.943Z] =================================================================================================================== 00:09:38.424 [2024-12-13T06:48:42.943Z] Total : 6571.88 25.67 0.00 0.00 0.00 0.00 0.00 00:09:38.424 00:09:39.390 [2024-12-13T06:48:43.909Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.390 Nvme0n1 : 9.00 6547.22 25.58 0.00 0.00 0.00 0.00 0.00 00:09:39.390 [2024-12-13T06:48:43.909Z] =================================================================================================================== 00:09:39.390 [2024-12-13T06:48:43.909Z] Total : 6547.22 25.58 0.00 0.00 0.00 0.00 0.00 00:09:39.390 00:09:40.327 [2024-12-13T06:48:44.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.327 Nvme0n1 : 10.00 6499.00 25.39 0.00 0.00 0.00 0.00 0.00 00:09:40.327 [2024-12-13T06:48:44.846Z] =================================================================================================================== 00:09:40.327 [2024-12-13T06:48:44.846Z] Total : 6499.00 25.39 0.00 0.00 0.00 0.00 0.00 00:09:40.327 00:09:40.327 00:09:40.327 Latency(us) 00:09:40.327 [2024-12-13T06:48:44.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.327 [2024-12-13T06:48:44.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.327 Nvme0n1 : 10.00 6509.63 25.43 0.00 0.00 19657.51 13166.78 63391.19 00:09:40.327 [2024-12-13T06:48:44.846Z] =================================================================================================================== 00:09:40.327 [2024-12-13T06:48:44.846Z] Total : 6509.63 25.43 0.00 0.00 19657.51 13166.78 63391.19 00:09:40.327 0 00:09:40.327 06:48:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72805 00:09:40.327 06:48:44 -- common/autotest_common.sh@936 -- # '[' -z 72805 ']' 00:09:40.327 06:48:44 -- common/autotest_common.sh@940 -- # kill -0 72805 00:09:40.327 06:48:44 -- common/autotest_common.sh@941 -- # uname 00:09:40.327 06:48:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.327 06:48:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72805 00:09:40.327 06:48:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:40.327 06:48:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:40.327 killing process with pid 72805 00:09:40.327 06:48:44 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 72805' 00:09:40.327 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.327 00:09:40.327 Latency(us) 00:09:40.327 [2024-12-13T06:48:44.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.327 [2024-12-13T06:48:44.846Z] =================================================================================================================== 00:09:40.327 [2024-12-13T06:48:44.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.327 06:48:44 -- common/autotest_common.sh@955 -- # kill 72805 00:09:40.327 06:48:44 -- common/autotest_common.sh@960 -- # wait 72805 00:09:40.586 06:48:44 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.844 06:48:45 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:40.844 06:48:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:41.102 06:48:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:41.102 06:48:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:41.102 06:48:45 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.361 [2024-12-13 06:48:45.687087] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.361 06:48:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:41.361 06:48:45 -- common/autotest_common.sh@650 -- # local es=0 00:09:41.361 06:48:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:41.361 06:48:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.361 06:48:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.361 06:48:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.361 06:48:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.361 06:48:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.361 06:48:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.361 06:48:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.361 06:48:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:41.361 06:48:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:41.620 request: 00:09:41.620 { 00:09:41.620 "uuid": "b04c70b6-5dab-4928-8fdf-9e4cd7e5461c", 00:09:41.620 "method": "bdev_lvol_get_lvstores", 00:09:41.620 "req_id": 1 00:09:41.620 } 00:09:41.620 Got JSON-RPC error response 00:09:41.620 response: 00:09:41.620 { 00:09:41.620 "code": -19, 00:09:41.620 "message": "No such device" 00:09:41.620 } 00:09:41.620 06:48:45 -- common/autotest_common.sh@653 -- # es=1 00:09:41.620 06:48:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.620 06:48:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.620 06:48:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
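[annotation] The request/response pair above is the point of the assertion: once bdev_aio_delete removes the base bdev, the lvstore is hot-removed with it, so bdev_lvol_get_lvstores must fail with -19 "No such device". The NOT wrapper inverts the exit status so the test passes only when the RPC fails. A simplified reconstruction of the pattern, not the exact autotest_common.sh definition (the real helper also validates that the argument is executable, which is what the type -t / type -P trace above shows):

  NOT() { ! "$@"; }   # succeeds only when the wrapped command exits non-zero
  NOT scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c \
    && echo "lvstore gone as expected"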
00:09:41.620 06:48:45 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.880 aio_bdev 00:09:41.880 06:48:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 747069c0-8e18-4461-a784-c6831107701a 00:09:41.880 06:48:46 -- common/autotest_common.sh@897 -- # local bdev_name=747069c0-8e18-4461-a784-c6831107701a 00:09:41.880 06:48:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.880 06:48:46 -- common/autotest_common.sh@899 -- # local i 00:09:41.880 06:48:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.880 06:48:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.880 06:48:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.139 06:48:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 747069c0-8e18-4461-a784-c6831107701a -t 2000 00:09:42.139 [ 00:09:42.139 { 00:09:42.139 "name": "747069c0-8e18-4461-a784-c6831107701a", 00:09:42.139 "aliases": [ 00:09:42.139 "lvs/lvol" 00:09:42.139 ], 00:09:42.139 "product_name": "Logical Volume", 00:09:42.139 "block_size": 4096, 00:09:42.139 "num_blocks": 38912, 00:09:42.139 "uuid": "747069c0-8e18-4461-a784-c6831107701a", 00:09:42.139 "assigned_rate_limits": { 00:09:42.139 "rw_ios_per_sec": 0, 00:09:42.139 "rw_mbytes_per_sec": 0, 00:09:42.139 "r_mbytes_per_sec": 0, 00:09:42.139 "w_mbytes_per_sec": 0 00:09:42.139 }, 00:09:42.139 "claimed": false, 00:09:42.139 "zoned": false, 00:09:42.139 "supported_io_types": { 00:09:42.139 "read": true, 00:09:42.139 "write": true, 00:09:42.139 "unmap": true, 00:09:42.139 "write_zeroes": true, 00:09:42.139 "flush": false, 00:09:42.139 "reset": true, 00:09:42.139 "compare": false, 00:09:42.139 "compare_and_write": false, 00:09:42.139 "abort": false, 00:09:42.139 "nvme_admin": false, 00:09:42.139 "nvme_io": false 00:09:42.139 }, 00:09:42.139 "driver_specific": { 00:09:42.139 "lvol": { 00:09:42.139 "lvol_store_uuid": "b04c70b6-5dab-4928-8fdf-9e4cd7e5461c", 00:09:42.139 "base_bdev": "aio_bdev", 00:09:42.139 "thin_provision": false, 00:09:42.139 "snapshot": false, 00:09:42.139 "clone": false, 00:09:42.139 "esnap_clone": false 00:09:42.139 } 00:09:42.139 } 00:09:42.139 } 00:09:42.139 ] 00:09:42.398 06:48:46 -- common/autotest_common.sh@905 -- # return 0 00:09:42.398 06:48:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:42.398 06:48:46 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:42.398 06:48:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:42.398 06:48:46 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:42.398 06:48:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:42.657 06:48:47 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:42.657 06:48:47 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 747069c0-8e18-4461-a784-c6831107701a 00:09:42.916 06:48:47 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b04c70b6-5dab-4928-8fdf-9e4cd7e5461c 00:09:43.175 06:48:47 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
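[annotation] The cluster counts checked throughout the clean test all follow from the sizes in the trace: 4 MiB clusters, a 200 MiB AIO file grown to 400 MiB, and a 150 MiB lvol. Worked arithmetic, assuming the lvstore metadata reserve is exactly one cluster here (which the 49 and 99 readings imply):

  echo $(( 200 / 4 - 1 ))                    # 49: data clusters in the 200 MiB AIO file
  echo $(( 400 / 4 - 1 ))                    # 99: after truncate -s 400M + bdev_lvol_grow_lvstore
  echo $(( (150 + 3) / 4 ))                  # 38: clusters for the 150 MiB lvol, rounded up
  echo $(( 38 * 4 * 1024 * 1024 / 4096 ))    # 38912: the num_blocks reported by bdev_get_bdevs
  echo $(( 99 - 38 ))                        # 61: free clusters left, matching free_clusters above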
00:09:43.743 06:48:47 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.003 ************************************ 00:09:44.003 END TEST lvs_grow_clean 00:09:44.003 ************************************ 00:09:44.003 00:09:44.003 real 0m17.958s 00:09:44.003 user 0m17.064s 00:09:44.003 sys 0m2.352s 00:09:44.003 06:48:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.003 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:44.003 06:48:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:44.003 06:48:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.003 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 ************************************ 00:09:44.003 START TEST lvs_grow_dirty 00:09:44.003 ************************************ 00:09:44.003 06:48:48 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.003 06:48:48 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.572 06:48:48 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:44.572 06:48:48 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:44.572 06:48:48 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:44.572 06:48:48 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.572 06:48:48 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:44.830 06:48:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.830 06:48:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.830 06:48:49 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ccf9e246-f985-414f-84f8-5adbed9ec663 lvol 150 00:09:45.089 06:48:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:09:45.089 06:48:49 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.089 06:48:49 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:45.348 [2024-12-13 06:48:49.755405] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:45.348 [2024-12-13 06:48:49.755766] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:45.348 true 00:09:45.348 06:48:49 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:45.348 06:48:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.607 06:48:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.607 06:48:50 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.866 06:48:50 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:09:46.124 06:48:50 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.384 06:48:50 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.642 06:48:50 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73074 00:09:46.643 06:48:50 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.643 06:48:50 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.643 06:48:50 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73074 /var/tmp/bdevperf.sock 00:09:46.643 06:48:50 -- common/autotest_common.sh@829 -- # '[' -z 73074 ']' 00:09:46.643 06:48:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.643 06:48:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.643 06:48:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.643 06:48:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.643 06:48:50 -- common/autotest_common.sh@10 -- # set +x 00:09:46.643 [2024-12-13 06:48:51.005639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:46.643 [2024-12-13 06:48:51.008428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73074 ] 00:09:46.643 [2024-12-13 06:48:51.142855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.902 [2024-12-13 06:48:51.181428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.839 06:48:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.839 06:48:51 -- common/autotest_common.sh@862 -- # return 0 00:09:47.839 06:48:51 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:47.839 Nvme0n1 00:09:47.839 06:48:52 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:48.098 [ 00:09:48.098 { 00:09:48.098 "name": "Nvme0n1", 00:09:48.098 "aliases": [ 00:09:48.098 "26b3e1f6-493e-49a9-a570-6c24b9dd4eea" 00:09:48.098 ], 00:09:48.098 "product_name": "NVMe disk", 00:09:48.098 "block_size": 4096, 00:09:48.098 "num_blocks": 38912, 00:09:48.098 "uuid": "26b3e1f6-493e-49a9-a570-6c24b9dd4eea", 00:09:48.098 "assigned_rate_limits": { 00:09:48.098 "rw_ios_per_sec": 0, 00:09:48.098 "rw_mbytes_per_sec": 0, 00:09:48.098 "r_mbytes_per_sec": 0, 00:09:48.098 "w_mbytes_per_sec": 0 00:09:48.098 }, 00:09:48.098 "claimed": false, 00:09:48.098 "zoned": false, 00:09:48.098 "supported_io_types": { 00:09:48.098 "read": true, 00:09:48.098 "write": true, 00:09:48.098 "unmap": true, 00:09:48.098 "write_zeroes": true, 00:09:48.098 "flush": true, 00:09:48.098 "reset": true, 00:09:48.098 "compare": true, 00:09:48.098 "compare_and_write": true, 00:09:48.098 "abort": true, 00:09:48.098 "nvme_admin": true, 00:09:48.098 "nvme_io": true 00:09:48.098 }, 00:09:48.098 "driver_specific": { 00:09:48.098 "nvme": [ 00:09:48.098 { 00:09:48.098 "trid": { 00:09:48.098 "trtype": "TCP", 00:09:48.098 "adrfam": "IPv4", 00:09:48.098 "traddr": "10.0.0.2", 00:09:48.098 "trsvcid": "4420", 00:09:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:48.098 }, 00:09:48.098 "ctrlr_data": { 00:09:48.098 "cntlid": 1, 00:09:48.098 "vendor_id": "0x8086", 00:09:48.098 "model_number": "SPDK bdev Controller", 00:09:48.098 "serial_number": "SPDK0", 00:09:48.098 "firmware_revision": "24.01.1", 00:09:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.098 "oacs": { 00:09:48.098 "security": 0, 00:09:48.098 "format": 0, 00:09:48.098 "firmware": 0, 00:09:48.098 "ns_manage": 0 00:09:48.098 }, 00:09:48.098 "multi_ctrlr": true, 00:09:48.098 "ana_reporting": false 00:09:48.098 }, 00:09:48.098 "vs": { 00:09:48.098 "nvme_version": "1.3" 00:09:48.098 }, 00:09:48.098 "ns_data": { 00:09:48.098 "id": 1, 00:09:48.098 "can_share": true 00:09:48.098 } 00:09:48.098 } 00:09:48.098 ], 00:09:48.098 "mp_policy": "active_passive" 00:09:48.098 } 00:09:48.098 } 00:09:48.098 ] 00:09:48.098 06:48:52 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73097 00:09:48.098 06:48:52 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:48.098 06:48:52 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.357 Running I/O for 10 seconds... 
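[annotation] bdevperf was started with -z, so it idles until the perform_tests RPC arrives over /var/tmp/bdevperf.sock; the run is then 10 seconds of 4 KiB random writes at queue depth 128 on core mask 0x2, with -S 1 producing the per-second samples in the table that follows. The two halves of the invocation, condensed from the trace:

  # workload process (waits for the RPC because of -z)
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
  # controlling script kicks off the timed run
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The lvstore is grown while this I/O is in flight (the bdev_lvol_grow_lvstore call issued just below, at the two-second sample), which is exactly what the per-second reporting is meant to stress.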
00:09:49.317 Latency(us) 00:09:49.317 [2024-12-13T06:48:53.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.317 [2024-12-13T06:48:53.837Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.318 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:49.318 [2024-12-13T06:48:53.837Z] =================================================================================================================== 00:09:49.318 [2024-12-13T06:48:53.837Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:49.318 00:09:50.254 06:48:54 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:50.254 [2024-12-13T06:48:54.773Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.254 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:50.254 [2024-12-13T06:48:54.773Z] =================================================================================================================== 00:09:50.254 [2024-12-13T06:48:54.773Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:50.254 00:09:50.513 true 00:09:50.513 06:48:54 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:50.513 06:48:54 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:50.774 06:48:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:50.774 06:48:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:50.774 06:48:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 73097 00:09:51.344 [2024-12-13T06:48:55.863Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.344 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:51.344 [2024-12-13T06:48:55.863Z] =================================================================================================================== 00:09:51.344 [2024-12-13T06:48:55.863Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:51.344 00:09:52.280 [2024-12-13T06:48:56.799Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.280 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:52.280 [2024-12-13T06:48:56.799Z] =================================================================================================================== 00:09:52.280 [2024-12-13T06:48:56.799Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:52.280 00:09:53.216 [2024-12-13T06:48:57.735Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.216 Nvme0n1 : 5.00 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:53.216 [2024-12-13T06:48:57.735Z] =================================================================================================================== 00:09:53.216 [2024-12-13T06:48:57.735Z] Total : 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:53.216 00:09:54.152 [2024-12-13T06:48:58.671Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.152 Nvme0n1 : 6.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:54.152 [2024-12-13T06:48:58.671Z] =================================================================================================================== 00:09:54.152 [2024-12-13T06:48:58.671Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:54.152 00:09:55.529 [2024-12-13T06:49:00.048Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:55.529 Nvme0n1 : 7.00 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:55.529 [2024-12-13T06:49:00.048Z] =================================================================================================================== 00:09:55.529 [2024-12-13T06:49:00.048Z] Total : 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:55.529 00:09:56.466 [2024-12-13T06:49:00.985Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.466 Nvme0n1 : 8.00 6566.38 25.65 0.00 0.00 0.00 0.00 0.00 00:09:56.466 [2024-12-13T06:49:00.985Z] =================================================================================================================== 00:09:56.466 [2024-12-13T06:49:00.985Z] Total : 6566.38 25.65 0.00 0.00 0.00 0.00 0.00 00:09:56.466 00:09:57.402 [2024-12-13T06:49:01.921Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.402 Nvme0n1 : 9.00 6542.33 25.56 0.00 0.00 0.00 0.00 0.00 00:09:57.402 [2024-12-13T06:49:01.921Z] =================================================================================================================== 00:09:57.402 [2024-12-13T06:49:01.921Z] Total : 6542.33 25.56 0.00 0.00 0.00 0.00 0.00 00:09:57.402 00:09:58.338 [2024-12-13T06:49:02.857Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.338 Nvme0n1 : 10.00 6523.10 25.48 0.00 0.00 0.00 0.00 0.00 00:09:58.338 [2024-12-13T06:49:02.857Z] =================================================================================================================== 00:09:58.338 [2024-12-13T06:49:02.857Z] Total : 6523.10 25.48 0.00 0.00 0.00 0.00 0.00 00:09:58.338 00:09:58.338 00:09:58.338 Latency(us) 00:09:58.338 [2024-12-13T06:49:02.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.338 [2024-12-13T06:49:02.857Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.338 Nvme0n1 : 10.00 6532.64 25.52 0.00 0.00 19589.58 14477.50 78643.20 00:09:58.338 [2024-12-13T06:49:02.857Z] =================================================================================================================== 00:09:58.338 [2024-12-13T06:49:02.857Z] Total : 6532.64 25.52 0.00 0.00 19589.58 14477.50 78643.20 00:09:58.338 0 00:09:58.338 06:49:02 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73074 00:09:58.338 06:49:02 -- common/autotest_common.sh@936 -- # '[' -z 73074 ']' 00:09:58.338 06:49:02 -- common/autotest_common.sh@940 -- # kill -0 73074 00:09:58.338 06:49:02 -- common/autotest_common.sh@941 -- # uname 00:09:58.338 06:49:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.338 06:49:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73074 00:09:58.338 killing process with pid 73074 00:09:58.338 Received shutdown signal, test time was about 10.000000 seconds 00:09:58.338 00:09:58.338 Latency(us) 00:09:58.338 [2024-12-13T06:49:02.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.338 [2024-12-13T06:49:02.857Z] =================================================================================================================== 00:09:58.338 [2024-12-13T06:49:02.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:58.338 06:49:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:58.338 06:49:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:58.338 06:49:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73074' 00:09:58.338 06:49:02 -- common/autotest_common.sh@955 
-- # kill 73074 00:09:58.338 06:49:02 -- common/autotest_common.sh@960 -- # wait 73074 00:09:58.597 06:49:02 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.855 06:49:03 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:09:58.855 06:49:03 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72730 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@74 -- # wait 72730 00:09:59.114 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72730 Killed "${NVMF_APP[@]}" "$@" 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:59.114 06:49:03 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:59.114 06:49:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:59.114 06:49:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.114 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:09:59.114 06:49:03 -- nvmf/common.sh@469 -- # nvmfpid=73229 00:09:59.114 06:49:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:59.114 06:49:03 -- nvmf/common.sh@470 -- # waitforlisten 73229 00:09:59.114 06:49:03 -- common/autotest_common.sh@829 -- # '[' -z 73229 ']' 00:09:59.114 06:49:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.114 06:49:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.114 06:49:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.114 06:49:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.114 06:49:03 -- common/autotest_common.sh@10 -- # set +x 00:09:59.114 [2024-12-13 06:49:03.463236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:59.114 [2024-12-13 06:49:03.463332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.114 [2024-12-13 06:49:03.595326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.115 [2024-12-13 06:49:03.629157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:59.115 [2024-12-13 06:49:03.629309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.115 [2024-12-13 06:49:03.629325] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.115 [2024-12-13 06:49:03.629334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
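[annotation] This is what makes the dirty variant dirty: the first target (pid 72730) is killed with SIGKILL while the lvstore is still open, so its superblob is never marked cleanly shut down, and the "line 74: 72730 Killed" message from the harness is expected output, not a failure. A fresh target (pid 73229) is then started in the same namespace; when it re-opens aio_bdev, the blobstore must replay its metadata, which is the "Performing recovery on blobstore" / "Recover: blob 0x0" notices a little further below. The essence of the scenario, sketched with pids and flags from the log:

  kill -9 72730 && wait 72730 || true    # lvstore never gets a clean shutdown
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # reload triggers recovery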
00:09:59.115 [2024-12-13 06:49:03.629388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.051 06:49:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.051 06:49:04 -- common/autotest_common.sh@862 -- # return 0 00:10:00.051 06:49:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:00.051 06:49:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.051 06:49:04 -- common/autotest_common.sh@10 -- # set +x 00:10:00.051 06:49:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.051 06:49:04 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.310 [2024-12-13 06:49:04.741744] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:00.310 [2024-12-13 06:49:04.742196] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:00.310 [2024-12-13 06:49:04.742583] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:00.310 06:49:04 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:00.310 06:49:04 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:10:00.310 06:49:04 -- common/autotest_common.sh@897 -- # local bdev_name=26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:10:00.310 06:49:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:00.310 06:49:04 -- common/autotest_common.sh@899 -- # local i 00:10:00.310 06:49:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:00.310 06:49:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:00.310 06:49:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.569 06:49:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26b3e1f6-493e-49a9-a570-6c24b9dd4eea -t 2000 00:10:00.827 [ 00:10:00.827 { 00:10:00.827 "name": "26b3e1f6-493e-49a9-a570-6c24b9dd4eea", 00:10:00.827 "aliases": [ 00:10:00.827 "lvs/lvol" 00:10:00.827 ], 00:10:00.827 "product_name": "Logical Volume", 00:10:00.827 "block_size": 4096, 00:10:00.827 "num_blocks": 38912, 00:10:00.827 "uuid": "26b3e1f6-493e-49a9-a570-6c24b9dd4eea", 00:10:00.827 "assigned_rate_limits": { 00:10:00.827 "rw_ios_per_sec": 0, 00:10:00.827 "rw_mbytes_per_sec": 0, 00:10:00.827 "r_mbytes_per_sec": 0, 00:10:00.827 "w_mbytes_per_sec": 0 00:10:00.827 }, 00:10:00.827 "claimed": false, 00:10:00.827 "zoned": false, 00:10:00.827 "supported_io_types": { 00:10:00.827 "read": true, 00:10:00.827 "write": true, 00:10:00.827 "unmap": true, 00:10:00.827 "write_zeroes": true, 00:10:00.827 "flush": false, 00:10:00.827 "reset": true, 00:10:00.827 "compare": false, 00:10:00.827 "compare_and_write": false, 00:10:00.827 "abort": false, 00:10:00.827 "nvme_admin": false, 00:10:00.827 "nvme_io": false 00:10:00.827 }, 00:10:00.827 "driver_specific": { 00:10:00.827 "lvol": { 00:10:00.827 "lvol_store_uuid": "ccf9e246-f985-414f-84f8-5adbed9ec663", 00:10:00.827 "base_bdev": "aio_bdev", 00:10:00.827 "thin_provision": false, 00:10:00.827 "snapshot": false, 00:10:00.827 "clone": false, 00:10:00.827 "esnap_clone": false 00:10:00.827 } 00:10:00.827 } 00:10:00.827 } 00:10:00.827 ] 00:10:00.827 06:49:05 -- common/autotest_common.sh@905 -- # return 0 00:10:00.827 06:49:05 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:00.827 06:49:05 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:01.085 06:49:05 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:01.085 06:49:05 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:01.085 06:49:05 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:01.343 06:49:05 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:01.343 06:49:05 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.613 [2024-12-13 06:49:05.963499] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:01.613 06:49:05 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:01.613 06:49:05 -- common/autotest_common.sh@650 -- # local es=0 00:10:01.613 06:49:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:01.613 06:49:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.613 06:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.613 06:49:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.613 06:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.613 06:49:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.613 06:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.613 06:49:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.613 06:49:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:01.613 06:49:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:01.883 request: 00:10:01.883 { 00:10:01.883 "uuid": "ccf9e246-f985-414f-84f8-5adbed9ec663", 00:10:01.883 "method": "bdev_lvol_get_lvstores", 00:10:01.883 "req_id": 1 00:10:01.883 } 00:10:01.883 Got JSON-RPC error response 00:10:01.883 response: 00:10:01.883 { 00:10:01.883 "code": -19, 00:10:01.883 "message": "No such device" 00:10:01.883 } 00:10:01.883 06:49:06 -- common/autotest_common.sh@653 -- # es=1 00:10:01.883 06:49:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.883 06:49:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:01.883 06:49:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:01.883 06:49:06 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.142 aio_bdev 00:10:02.142 06:49:06 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:10:02.142 06:49:06 -- common/autotest_common.sh@897 -- # local bdev_name=26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:10:02.142 06:49:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:02.142 06:49:06 -- common/autotest_common.sh@899 -- # local i 00:10:02.142 06:49:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:02.142 06:49:06 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:02.142 06:49:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.401 06:49:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26b3e1f6-493e-49a9-a570-6c24b9dd4eea -t 2000 00:10:02.401 [ 00:10:02.401 { 00:10:02.401 "name": "26b3e1f6-493e-49a9-a570-6c24b9dd4eea", 00:10:02.401 "aliases": [ 00:10:02.401 "lvs/lvol" 00:10:02.401 ], 00:10:02.401 "product_name": "Logical Volume", 00:10:02.401 "block_size": 4096, 00:10:02.401 "num_blocks": 38912, 00:10:02.401 "uuid": "26b3e1f6-493e-49a9-a570-6c24b9dd4eea", 00:10:02.401 "assigned_rate_limits": { 00:10:02.401 "rw_ios_per_sec": 0, 00:10:02.401 "rw_mbytes_per_sec": 0, 00:10:02.401 "r_mbytes_per_sec": 0, 00:10:02.401 "w_mbytes_per_sec": 0 00:10:02.401 }, 00:10:02.401 "claimed": false, 00:10:02.401 "zoned": false, 00:10:02.401 "supported_io_types": { 00:10:02.401 "read": true, 00:10:02.401 "write": true, 00:10:02.401 "unmap": true, 00:10:02.401 "write_zeroes": true, 00:10:02.401 "flush": false, 00:10:02.401 "reset": true, 00:10:02.401 "compare": false, 00:10:02.401 "compare_and_write": false, 00:10:02.401 "abort": false, 00:10:02.401 "nvme_admin": false, 00:10:02.401 "nvme_io": false 00:10:02.401 }, 00:10:02.401 "driver_specific": { 00:10:02.401 "lvol": { 00:10:02.401 "lvol_store_uuid": "ccf9e246-f985-414f-84f8-5adbed9ec663", 00:10:02.401 "base_bdev": "aio_bdev", 00:10:02.401 "thin_provision": false, 00:10:02.401 "snapshot": false, 00:10:02.401 "clone": false, 00:10:02.401 "esnap_clone": false 00:10:02.401 } 00:10:02.401 } 00:10:02.401 } 00:10:02.401 ] 00:10:02.401 06:49:06 -- common/autotest_common.sh@905 -- # return 0 00:10:02.401 06:49:06 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:02.401 06:49:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:02.660 06:49:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:02.660 06:49:07 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:02.660 06:49:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:02.919 06:49:07 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:02.919 06:49:07 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 26b3e1f6-493e-49a9-a570-6c24b9dd4eea 00:10:03.178 06:49:07 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ccf9e246-f985-414f-84f8-5adbed9ec663 00:10:03.437 06:49:07 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.696 06:49:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.955 ************************************ 00:10:03.955 END TEST lvs_grow_dirty 00:10:03.955 ************************************ 00:10:03.955 00:10:03.955 real 0m19.957s 00:10:03.955 user 0m39.614s 00:10:03.955 sys 0m9.256s 00:10:03.955 06:49:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:03.955 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:10:03.955 06:49:08 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:03.955 06:49:08 -- common/autotest_common.sh@806 -- # type=--id 00:10:03.955 06:49:08 -- 
common/autotest_common.sh@807 -- # id=0 00:10:03.955 06:49:08 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:03.955 06:49:08 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:03.955 06:49:08 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:03.955 06:49:08 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:03.955 06:49:08 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:03.955 06:49:08 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:03.955 nvmf_trace.0 00:10:04.214 06:49:08 -- common/autotest_common.sh@821 -- # return 0 00:10:04.214 06:49:08 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:04.214 06:49:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:04.214 06:49:08 -- nvmf/common.sh@116 -- # sync 00:10:04.473 06:49:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:04.473 06:49:08 -- nvmf/common.sh@119 -- # set +e 00:10:04.473 06:49:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:04.473 06:49:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:04.473 rmmod nvme_tcp 00:10:04.473 rmmod nvme_fabrics 00:10:04.473 rmmod nvme_keyring 00:10:04.473 06:49:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:04.473 06:49:08 -- nvmf/common.sh@123 -- # set -e 00:10:04.473 06:49:08 -- nvmf/common.sh@124 -- # return 0 00:10:04.473 06:49:08 -- nvmf/common.sh@477 -- # '[' -n 73229 ']' 00:10:04.473 06:49:08 -- nvmf/common.sh@478 -- # killprocess 73229 00:10:04.473 06:49:08 -- common/autotest_common.sh@936 -- # '[' -z 73229 ']' 00:10:04.473 06:49:08 -- common/autotest_common.sh@940 -- # kill -0 73229 00:10:04.473 06:49:08 -- common/autotest_common.sh@941 -- # uname 00:10:04.473 06:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.473 06:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73229 00:10:04.473 killing process with pid 73229 00:10:04.473 06:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:04.473 06:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:04.473 06:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73229' 00:10:04.473 06:49:08 -- common/autotest_common.sh@955 -- # kill 73229 00:10:04.473 06:49:08 -- common/autotest_common.sh@960 -- # wait 73229 00:10:04.731 06:49:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:04.731 06:49:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:04.731 06:49:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:04.731 06:49:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.731 06:49:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:04.731 06:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.731 06:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.731 06:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.731 06:49:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:04.731 ************************************ 00:10:04.731 END TEST nvmf_lvs_grow 00:10:04.731 ************************************ 00:10:04.731 00:10:04.731 real 0m39.899s 00:10:04.731 user 1m2.860s 00:10:04.731 sys 0m12.424s 00:10:04.731 06:49:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.731 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:10:04.731 06:49:09 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.731 06:49:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:04.731 06:49:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.731 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:10:04.731 ************************************ 00:10:04.732 START TEST nvmf_bdev_io_wait 00:10:04.732 ************************************ 00:10:04.732 06:49:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.732 * Looking for test storage... 00:10:04.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.732 06:49:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:04.732 06:49:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:04.732 06:49:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:04.991 06:49:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:04.991 06:49:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:04.991 06:49:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:04.991 06:49:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:04.991 06:49:09 -- scripts/common.sh@335 -- # IFS=.-: 00:10:04.991 06:49:09 -- scripts/common.sh@335 -- # read -ra ver1 00:10:04.991 06:49:09 -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.991 06:49:09 -- scripts/common.sh@336 -- # read -ra ver2 00:10:04.991 06:49:09 -- scripts/common.sh@337 -- # local 'op=<' 00:10:04.991 06:49:09 -- scripts/common.sh@339 -- # ver1_l=2 00:10:04.991 06:49:09 -- scripts/common.sh@340 -- # ver2_l=1 00:10:04.991 06:49:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:04.991 06:49:09 -- scripts/common.sh@343 -- # case "$op" in 00:10:04.991 06:49:09 -- scripts/common.sh@344 -- # : 1 00:10:04.991 06:49:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:04.991 06:49:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.991 06:49:09 -- scripts/common.sh@364 -- # decimal 1 00:10:04.991 06:49:09 -- scripts/common.sh@352 -- # local d=1 00:10:04.991 06:49:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.991 06:49:09 -- scripts/common.sh@354 -- # echo 1 00:10:04.991 06:49:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:04.991 06:49:09 -- scripts/common.sh@365 -- # decimal 2 00:10:04.991 06:49:09 -- scripts/common.sh@352 -- # local d=2 00:10:04.991 06:49:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.991 06:49:09 -- scripts/common.sh@354 -- # echo 2 00:10:04.991 06:49:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:04.991 06:49:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:04.991 06:49:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:04.991 06:49:09 -- scripts/common.sh@367 -- # return 0 00:10:04.991 06:49:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.991 06:49:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:04.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.991 --rc genhtml_branch_coverage=1 00:10:04.991 --rc genhtml_function_coverage=1 00:10:04.991 --rc genhtml_legend=1 00:10:04.991 --rc geninfo_all_blocks=1 00:10:04.991 --rc geninfo_unexecuted_blocks=1 00:10:04.991 00:10:04.991 ' 00:10:04.991 06:49:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:04.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.991 --rc genhtml_branch_coverage=1 00:10:04.991 --rc genhtml_function_coverage=1 00:10:04.991 --rc genhtml_legend=1 00:10:04.991 --rc geninfo_all_blocks=1 00:10:04.991 --rc geninfo_unexecuted_blocks=1 00:10:04.991 00:10:04.991 ' 00:10:04.991 06:49:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:04.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.991 --rc genhtml_branch_coverage=1 00:10:04.991 --rc genhtml_function_coverage=1 00:10:04.991 --rc genhtml_legend=1 00:10:04.991 --rc geninfo_all_blocks=1 00:10:04.991 --rc geninfo_unexecuted_blocks=1 00:10:04.991 00:10:04.991 ' 00:10:04.991 06:49:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:04.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.991 --rc genhtml_branch_coverage=1 00:10:04.991 --rc genhtml_function_coverage=1 00:10:04.991 --rc genhtml_legend=1 00:10:04.991 --rc geninfo_all_blocks=1 00:10:04.991 --rc geninfo_unexecuted_blocks=1 00:10:04.991 00:10:04.991 ' 00:10:04.991 06:49:09 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.991 06:49:09 -- nvmf/common.sh@7 -- # uname -s 00:10:04.991 06:49:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.991 06:49:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.991 06:49:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.991 06:49:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.991 06:49:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.991 06:49:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.991 06:49:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.991 06:49:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.991 06:49:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.991 06:49:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.991 06:49:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 
00:10:04.991 06:49:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:04.991 06:49:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.991 06:49:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.991 06:49:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.991 06:49:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.991 06:49:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.991 06:49:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.991 06:49:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.991 06:49:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.991 06:49:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.991 06:49:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.991 06:49:09 -- paths/export.sh@5 -- # export PATH 00:10:04.991 06:49:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.991 06:49:09 -- nvmf/common.sh@46 -- # : 0 00:10:04.991 06:49:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:04.991 06:49:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:04.991 06:49:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:04.991 06:49:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.991 06:49:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.991 06:49:09 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:04.991 06:49:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:04.991 06:49:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:04.991 06:49:09 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.991 06:49:09 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.991 06:49:09 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:04.991 06:49:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:04.992 06:49:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.992 06:49:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:04.992 06:49:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:04.992 06:49:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:04.992 06:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.992 06:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.992 06:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.992 06:49:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:04.992 06:49:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:04.992 06:49:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:04.992 06:49:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:04.992 06:49:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:04.992 06:49:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:04.992 06:49:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.992 06:49:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.992 06:49:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.992 06:49:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:04.992 06:49:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.992 06:49:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.992 06:49:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.992 06:49:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.992 06:49:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.992 06:49:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.992 06:49:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.992 06:49:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.992 06:49:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:04.992 06:49:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:04.992 Cannot find device "nvmf_tgt_br" 00:10:04.992 06:49:09 -- nvmf/common.sh@154 -- # true 00:10:04.992 06:49:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.992 Cannot find device "nvmf_tgt_br2" 00:10:04.992 06:49:09 -- nvmf/common.sh@155 -- # true 00:10:04.992 06:49:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:04.992 06:49:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:04.992 Cannot find device "nvmf_tgt_br" 00:10:04.992 06:49:09 -- nvmf/common.sh@157 -- # true 00:10:04.992 06:49:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:04.992 Cannot find device "nvmf_tgt_br2" 00:10:04.992 06:49:09 -- nvmf/common.sh@158 -- # true 00:10:04.992 06:49:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:04.992 06:49:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:04.992 06:49:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.992 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.992 06:49:09 -- nvmf/common.sh@161 -- # true 00:10:04.992 06:49:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.251 06:49:09 -- nvmf/common.sh@162 -- # true 00:10:05.251 06:49:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.251 06:49:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.251 06:49:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.251 06:49:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.251 06:49:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.251 06:49:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.251 06:49:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.251 06:49:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:05.251 06:49:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:05.251 06:49:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:05.251 06:49:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:05.251 06:49:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:05.251 06:49:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:05.251 06:49:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.251 06:49:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.251 06:49:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.251 06:49:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:05.251 06:49:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:05.251 06:49:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.251 06:49:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.251 06:49:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.251 06:49:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.251 06:49:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.251 06:49:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:05.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:05.251 00:10:05.251 --- 10.0.0.2 ping statistics --- 00:10:05.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.252 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:05.252 06:49:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:05.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:10:05.252 00:10:05.252 --- 10.0.0.3 ping statistics --- 00:10:05.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.252 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:05.252 06:49:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:05.252 00:10:05.252 --- 10.0.0.1 ping statistics --- 00:10:05.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.252 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:05.252 06:49:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.252 06:49:09 -- nvmf/common.sh@421 -- # return 0 00:10:05.252 06:49:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:05.252 06:49:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.252 06:49:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:05.252 06:49:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:05.252 06:49:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.252 06:49:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:05.252 06:49:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:05.252 06:49:09 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:05.252 06:49:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:05.252 06:49:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.252 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:10:05.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.252 06:49:09 -- nvmf/common.sh@469 -- # nvmfpid=73547 00:10:05.252 06:49:09 -- nvmf/common.sh@470 -- # waitforlisten 73547 00:10:05.252 06:49:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:05.252 06:49:09 -- common/autotest_common.sh@829 -- # '[' -z 73547 ']' 00:10:05.252 06:49:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.252 06:49:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.252 06:49:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.252 06:49:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.252 06:49:09 -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 [2024-12-13 06:49:09.786392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:05.511 [2024-12-13 06:49:09.786498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.511 [2024-12-13 06:49:09.926834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.511 [2024-12-13 06:49:09.962109] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:05.511 [2024-12-13 06:49:09.962544] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.511 [2024-12-13 06:49:09.962671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.511 [2024-12-13 06:49:09.962690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:05.511 [2024-12-13 06:49:09.962788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.511 [2024-12-13 06:49:09.962931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.511 [2024-12-13 06:49:09.963414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.511 [2024-12-13 06:49:09.963445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.770 06:49:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.770 06:49:10 -- common/autotest_common.sh@862 -- # return 0 00:10:05.770 06:49:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:05.770 06:49:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 06:49:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 [2024-12-13 06:49:10.136530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.770 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 Malloc0 00:10:05.770 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 06:49:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.770 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.771 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.771 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 [2024-12-13 06:49:10.194735] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.771 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73575 00:10:05.771 06:49:10 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73577 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # config=() 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # local subsystem config 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # config=() 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73579 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # local subsystem config 00:10:05.771 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:05.771 { 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme$subsystem", 00:10:05.771 "trtype": "$TEST_TRANSPORT", 00:10:05.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "$NVMF_PORT", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.771 "hdgst": ${hdgst:-false}, 00:10:05.771 "ddgst": ${ddgst:-false} 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 } 00:10:05.771 EOF 00:10:05.771 )") 00:10:05.771 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73581 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@35 -- # sync 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:05.771 { 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme$subsystem", 00:10:05.771 "trtype": "$TEST_TRANSPORT", 00:10:05.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "$NVMF_PORT", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.771 "hdgst": ${hdgst:-false}, 00:10:05.771 "ddgst": ${ddgst:-false} 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 } 00:10:05.771 EOF 00:10:05.771 )") 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # cat 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # config=() 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # local subsystem config 00:10:05.771 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:05.771 { 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme$subsystem", 00:10:05.771 "trtype": "$TEST_TRANSPORT", 00:10:05.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "$NVMF_PORT", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:10:05.771 "hdgst": ${hdgst:-false}, 00:10:05.771 "ddgst": ${ddgst:-false} 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 } 00:10:05.771 EOF 00:10:05.771 )") 00:10:05.771 06:49:10 -- nvmf/common.sh@544 -- # jq . 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # cat 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # cat 00:10:05.771 06:49:10 -- nvmf/common.sh@545 -- # IFS=, 00:10:05.771 06:49:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme1", 00:10:05.771 "trtype": "tcp", 00:10:05.771 "traddr": "10.0.0.2", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "4420", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.771 "hdgst": false, 00:10:05.771 "ddgst": false 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 }' 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # config=() 00:10:05.771 06:49:10 -- nvmf/common.sh@520 -- # local subsystem config 00:10:05.771 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:05.771 { 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme$subsystem", 00:10:05.771 "trtype": "$TEST_TRANSPORT", 00:10:05.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "$NVMF_PORT", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.771 "hdgst": ${hdgst:-false}, 00:10:05.771 "ddgst": ${ddgst:-false} 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 } 00:10:05.771 EOF 00:10:05.771 )") 00:10:05.771 06:49:10 -- nvmf/common.sh@544 -- # jq . 00:10:05.771 06:49:10 -- nvmf/common.sh@542 -- # cat 00:10:05.771 06:49:10 -- nvmf/common.sh@545 -- # IFS=, 00:10:05.771 06:49:10 -- nvmf/common.sh@544 -- # jq . 00:10:05.771 06:49:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme1", 00:10:05.771 "trtype": "tcp", 00:10:05.771 "traddr": "10.0.0.2", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "4420", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.771 "hdgst": false, 00:10:05.771 "ddgst": false 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 }' 00:10:05.771 06:49:10 -- nvmf/common.sh@545 -- # IFS=, 00:10:05.771 06:49:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme1", 00:10:05.771 "trtype": "tcp", 00:10:05.771 "traddr": "10.0.0.2", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "4420", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.771 "hdgst": false, 00:10:05.771 "ddgst": false 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 }' 00:10:05.771 06:49:10 -- nvmf/common.sh@544 -- # jq . 
00:10:05.771 06:49:10 -- nvmf/common.sh@545 -- # IFS=, 00:10:05.771 06:49:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:05.771 "params": { 00:10:05.771 "name": "Nvme1", 00:10:05.771 "trtype": "tcp", 00:10:05.771 "traddr": "10.0.0.2", 00:10:05.771 "adrfam": "ipv4", 00:10:05.771 "trsvcid": "4420", 00:10:05.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.771 "hdgst": false, 00:10:05.771 "ddgst": false 00:10:05.771 }, 00:10:05.771 "method": "bdev_nvme_attach_controller" 00:10:05.771 }' 00:10:05.771 [2024-12-13 06:49:10.252732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:05.771 [2024-12-13 06:49:10.252972] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:05.771 [2024-12-13 06:49:10.256943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:05.771 [2024-12-13 06:49:10.257159] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:05.771 06:49:10 -- target/bdev_io_wait.sh@37 -- # wait 73575 00:10:06.031 [2024-12-13 06:49:10.293011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.031 [2024-12-13 06:49:10.293342] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:06.031 [2024-12-13 06:49:10.296834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.031 [2024-12-13 06:49:10.297076] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:06.031 [2024-12-13 06:49:10.436378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.031 [2024-12-13 06:49:10.464816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:06.031 [2024-12-13 06:49:10.474378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.031 [2024-12-13 06:49:10.499097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:06.031 [2024-12-13 06:49:10.519164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.031 [2024-12-13 06:49:10.544103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:06.290 [2024-12-13 06:49:10.575328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.290 Running I/O for 1 seconds... 00:10:06.290 [2024-12-13 06:49:10.605201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.290 Running I/O for 1 seconds... 00:10:06.290 Running I/O for 1 seconds... 00:10:06.290 Running I/O for 1 seconds... 
00:10:07.227 00:10:07.227 Latency(us) 00:10:07.227 [2024-12-13T06:49:11.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.227 [2024-12-13T06:49:11.746Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:07.227 Nvme1n1 : 1.02 6336.47 24.75 0.00 0.00 19882.57 6881.28 35031.97 00:10:07.227 [2024-12-13T06:49:11.746Z] =================================================================================================================== 00:10:07.227 [2024-12-13T06:49:11.746Z] Total : 6336.47 24.75 0.00 0.00 19882.57 6881.28 35031.97 00:10:07.227 00:10:07.227 Latency(us) 00:10:07.227 [2024-12-13T06:49:11.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.227 [2024-12-13T06:49:11.746Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:07.227 Nvme1n1 : 1.00 169413.46 661.77 0.00 0.00 752.92 350.02 1206.46 00:10:07.227 [2024-12-13T06:49:11.746Z] =================================================================================================================== 00:10:07.227 [2024-12-13T06:49:11.746Z] Total : 169413.46 661.77 0.00 0.00 752.92 350.02 1206.46 00:10:07.227 00:10:07.227 Latency(us) 00:10:07.227 [2024-12-13T06:49:11.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.227 [2024-12-13T06:49:11.746Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:07.227 Nvme1n1 : 1.01 9599.86 37.50 0.00 0.00 13277.42 7417.48 25856.93 00:10:07.227 [2024-12-13T06:49:11.746Z] =================================================================================================================== 00:10:07.227 [2024-12-13T06:49:11.746Z] Total : 9599.86 37.50 0.00 0.00 13277.42 7417.48 25856.93 00:10:07.227 00:10:07.227 Latency(us) 00:10:07.227 [2024-12-13T06:49:11.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.227 [2024-12-13T06:49:11.746Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:07.227 Nvme1n1 : 1.01 6290.47 24.57 0.00 0.00 20283.21 5689.72 49569.05 00:10:07.227 [2024-12-13T06:49:11.746Z] =================================================================================================================== 00:10:07.227 [2024-12-13T06:49:11.746Z] Total : 6290.47 24.57 0.00 0.00 20283.21 5689.72 49569.05 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@38 -- # wait 73577 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@39 -- # wait 73579 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@40 -- # wait 73581 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.486 06:49:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.486 06:49:11 -- common/autotest_common.sh@10 -- # set +x 00:10:07.486 06:49:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:07.486 06:49:11 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:07.486 06:49:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:07.486 06:49:11 -- nvmf/common.sh@116 -- # sync 00:10:07.486 06:49:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:07.486 06:49:11 -- nvmf/common.sh@119 -- # set +e 00:10:07.486 06:49:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:07.486 06:49:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:07.486 rmmod nvme_tcp 00:10:07.486 rmmod nvme_fabrics 00:10:07.486 rmmod nvme_keyring 00:10:07.486 06:49:11 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:07.486 06:49:11 -- nvmf/common.sh@123 -- # set -e 00:10:07.486 06:49:11 -- nvmf/common.sh@124 -- # return 0 00:10:07.486 06:49:11 -- nvmf/common.sh@477 -- # '[' -n 73547 ']' 00:10:07.486 06:49:11 -- nvmf/common.sh@478 -- # killprocess 73547 00:10:07.486 06:49:11 -- common/autotest_common.sh@936 -- # '[' -z 73547 ']' 00:10:07.487 06:49:11 -- common/autotest_common.sh@940 -- # kill -0 73547 00:10:07.487 06:49:11 -- common/autotest_common.sh@941 -- # uname 00:10:07.487 06:49:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.487 06:49:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73547 00:10:07.746 killing process with pid 73547 00:10:07.746 06:49:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.746 06:49:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.746 06:49:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73547' 00:10:07.746 06:49:12 -- common/autotest_common.sh@955 -- # kill 73547 00:10:07.746 06:49:12 -- common/autotest_common.sh@960 -- # wait 73547 00:10:07.746 06:49:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:07.746 06:49:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:07.746 06:49:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:07.746 06:49:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.746 06:49:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:07.746 06:49:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.746 06:49:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.746 06:49:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.746 06:49:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:07.746 ************************************ 00:10:07.746 END TEST nvmf_bdev_io_wait 00:10:07.746 ************************************ 00:10:07.746 00:10:07.746 real 0m3.044s 00:10:07.746 user 0m13.012s 00:10:07.746 sys 0m1.902s 00:10:07.746 06:49:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.746 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 06:49:12 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.746 06:49:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:07.746 06:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.746 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:10:07.746 ************************************ 00:10:07.746 START TEST nvmf_queue_depth 00:10:07.746 ************************************ 00:10:07.746 06:49:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.005 * Looking for test storage... 
00:10:08.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.005 06:49:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:08.005 06:49:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:08.005 06:49:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:08.005 06:49:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:08.005 06:49:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:08.005 06:49:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:08.005 06:49:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:08.005 06:49:12 -- scripts/common.sh@335 -- # IFS=.-: 00:10:08.005 06:49:12 -- scripts/common.sh@335 -- # read -ra ver1 00:10:08.005 06:49:12 -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.005 06:49:12 -- scripts/common.sh@336 -- # read -ra ver2 00:10:08.005 06:49:12 -- scripts/common.sh@337 -- # local 'op=<' 00:10:08.005 06:49:12 -- scripts/common.sh@339 -- # ver1_l=2 00:10:08.005 06:49:12 -- scripts/common.sh@340 -- # ver2_l=1 00:10:08.005 06:49:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:08.005 06:49:12 -- scripts/common.sh@343 -- # case "$op" in 00:10:08.005 06:49:12 -- scripts/common.sh@344 -- # : 1 00:10:08.005 06:49:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:08.005 06:49:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.005 06:49:12 -- scripts/common.sh@364 -- # decimal 1 00:10:08.005 06:49:12 -- scripts/common.sh@352 -- # local d=1 00:10:08.005 06:49:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.005 06:49:12 -- scripts/common.sh@354 -- # echo 1 00:10:08.005 06:49:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:08.005 06:49:12 -- scripts/common.sh@365 -- # decimal 2 00:10:08.005 06:49:12 -- scripts/common.sh@352 -- # local d=2 00:10:08.005 06:49:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.005 06:49:12 -- scripts/common.sh@354 -- # echo 2 00:10:08.005 06:49:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:08.005 06:49:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:08.005 06:49:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:08.005 06:49:12 -- scripts/common.sh@367 -- # return 0 00:10:08.005 06:49:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.005 06:49:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:08.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.005 --rc genhtml_branch_coverage=1 00:10:08.005 --rc genhtml_function_coverage=1 00:10:08.005 --rc genhtml_legend=1 00:10:08.005 --rc geninfo_all_blocks=1 00:10:08.005 --rc geninfo_unexecuted_blocks=1 00:10:08.005 00:10:08.005 ' 00:10:08.005 06:49:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:08.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.006 --rc genhtml_branch_coverage=1 00:10:08.006 --rc genhtml_function_coverage=1 00:10:08.006 --rc genhtml_legend=1 00:10:08.006 --rc geninfo_all_blocks=1 00:10:08.006 --rc geninfo_unexecuted_blocks=1 00:10:08.006 00:10:08.006 ' 00:10:08.006 06:49:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:08.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.006 --rc genhtml_branch_coverage=1 00:10:08.006 --rc genhtml_function_coverage=1 00:10:08.006 --rc genhtml_legend=1 00:10:08.006 --rc geninfo_all_blocks=1 00:10:08.006 --rc geninfo_unexecuted_blocks=1 00:10:08.006 00:10:08.006 ' 00:10:08.006 
06:49:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:08.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.006 --rc genhtml_branch_coverage=1 00:10:08.006 --rc genhtml_function_coverage=1 00:10:08.006 --rc genhtml_legend=1 00:10:08.006 --rc geninfo_all_blocks=1 00:10:08.006 --rc geninfo_unexecuted_blocks=1 00:10:08.006 00:10:08.006 ' 00:10:08.006 06:49:12 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.006 06:49:12 -- nvmf/common.sh@7 -- # uname -s 00:10:08.006 06:49:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.006 06:49:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.006 06:49:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.006 06:49:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.006 06:49:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.006 06:49:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.006 06:49:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.006 06:49:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.006 06:49:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.006 06:49:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:08.006 06:49:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:08.006 06:49:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.006 06:49:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.006 06:49:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.006 06:49:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.006 06:49:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.006 06:49:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.006 06:49:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.006 06:49:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.006 06:49:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.006 06:49:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.006 06:49:12 -- paths/export.sh@5 -- # export PATH 00:10:08.006 06:49:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.006 06:49:12 -- nvmf/common.sh@46 -- # : 0 00:10:08.006 06:49:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:08.006 06:49:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:08.006 06:49:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:08.006 06:49:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.006 06:49:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.006 06:49:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:08.006 06:49:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:08.006 06:49:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:08.006 06:49:12 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.006 06:49:12 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:08.006 06:49:12 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.006 06:49:12 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.006 06:49:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:08.006 06:49:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.006 06:49:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:08.006 06:49:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:08.006 06:49:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:08.006 06:49:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.006 06:49:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.006 06:49:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.006 06:49:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:08.006 06:49:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:08.006 06:49:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.006 06:49:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.006 06:49:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:08.006 06:49:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:08.006 06:49:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.006 06:49:12 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.006 06:49:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.006 06:49:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.006 06:49:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.006 06:49:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.006 06:49:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.006 06:49:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.006 06:49:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:08.006 06:49:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:08.006 Cannot find device "nvmf_tgt_br" 00:10:08.006 06:49:12 -- nvmf/common.sh@154 -- # true 00:10:08.006 06:49:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.006 Cannot find device "nvmf_tgt_br2" 00:10:08.006 06:49:12 -- nvmf/common.sh@155 -- # true 00:10:08.006 06:49:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:08.006 06:49:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:08.006 Cannot find device "nvmf_tgt_br" 00:10:08.266 06:49:12 -- nvmf/common.sh@157 -- # true 00:10:08.266 06:49:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:08.266 Cannot find device "nvmf_tgt_br2" 00:10:08.266 06:49:12 -- nvmf/common.sh@158 -- # true 00:10:08.266 06:49:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:08.266 06:49:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:08.266 06:49:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.266 06:49:12 -- nvmf/common.sh@161 -- # true 00:10:08.266 06:49:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.266 06:49:12 -- nvmf/common.sh@162 -- # true 00:10:08.266 06:49:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.266 06:49:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.266 06:49:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.266 06:49:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.266 06:49:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.266 06:49:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.266 06:49:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.266 06:49:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:08.266 06:49:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:08.266 06:49:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:08.266 06:49:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:08.266 06:49:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:08.266 06:49:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:08.266 06:49:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.266 06:49:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:10:08.266 06:49:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.266 06:49:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:08.266 06:49:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:08.266 06:49:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.266 06:49:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.266 06:49:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.266 06:49:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.266 06:49:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.266 06:49:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:08.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:10:08.266 00:10:08.266 --- 10.0.0.2 ping statistics --- 00:10:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.266 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:08.266 06:49:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:08.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:08.266 00:10:08.266 --- 10.0.0.3 ping statistics --- 00:10:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.266 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:08.266 06:49:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:08.266 00:10:08.266 --- 10.0.0.1 ping statistics --- 00:10:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.266 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:08.525 06:49:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.525 06:49:12 -- nvmf/common.sh@421 -- # return 0 00:10:08.525 06:49:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:08.525 06:49:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.525 06:49:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:08.525 06:49:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:08.525 06:49:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.525 06:49:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:08.525 06:49:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:08.525 06:49:12 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:08.525 06:49:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:08.525 06:49:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.525 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:10:08.526 06:49:12 -- nvmf/common.sh@469 -- # nvmfpid=73790 00:10:08.526 06:49:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:08.526 06:49:12 -- nvmf/common.sh@470 -- # waitforlisten 73790 00:10:08.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:08.526 06:49:12 -- common/autotest_common.sh@829 -- # '[' -z 73790 ']' 00:10:08.526 06:49:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.526 06:49:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.526 06:49:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.526 06:49:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.526 06:49:12 -- common/autotest_common.sh@10 -- # set +x 00:10:08.526 [2024-12-13 06:49:12.863995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.526 [2024-12-13 06:49:12.864100] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.526 [2024-12-13 06:49:13.006109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.526 [2024-12-13 06:49:13.037838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.526 [2024-12-13 06:49:13.038239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.526 [2024-12-13 06:49:13.038261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.526 [2024-12-13 06:49:13.038270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.526 [2024-12-13 06:49:13.038298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.462 06:49:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.462 06:49:13 -- common/autotest_common.sh@862 -- # return 0 00:10:09.462 06:49:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:09.462 06:49:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.462 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 06:49:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.462 06:49:13 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.462 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 [2024-12-13 06:49:13.923344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.462 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.462 06:49:13 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.462 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 Malloc0 00:10:09.462 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.462 06:49:13 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.462 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.462 06:49:13 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.462 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 06:49:13 -- common/autotest_common.sh@10 -- # set 
+x 00:10:09.720 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.720 06:49:13 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.720 06:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.720 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.720 [2024-12-13 06:49:13.989740] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.720 06:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.720 06:49:13 -- target/queue_depth.sh@30 -- # bdevperf_pid=73823 00:10:09.720 06:49:13 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:09.720 06:49:13 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.720 06:49:13 -- target/queue_depth.sh@33 -- # waitforlisten 73823 /var/tmp/bdevperf.sock 00:10:09.720 06:49:13 -- common/autotest_common.sh@829 -- # '[' -z 73823 ']' 00:10:09.720 06:49:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.720 06:49:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.720 06:49:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.720 06:49:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.720 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:10:09.720 [2024-12-13 06:49:14.037427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:09.720 [2024-12-13 06:49:14.037689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73823 ] 00:10:09.720 [2024-12-13 06:49:14.172814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.720 [2024-12-13 06:49:14.209660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.979 06:49:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.979 06:49:14 -- common/autotest_common.sh@862 -- # return 0 00:10:09.979 06:49:14 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:09.979 06:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.979 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:10:09.979 NVMe0n1 00:10:09.979 06:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 06:49:14 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.979 Running I/O for 10 seconds... 
00:10:22.216 00:10:22.216 Latency(us) 00:10:22.216 [2024-12-13T06:49:26.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.216 [2024-12-13T06:49:26.735Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:22.216 Verification LBA range: start 0x0 length 0x4000 00:10:22.216 NVMe0n1 : 10.06 15215.57 59.44 0.00 0.00 67050.21 13702.98 57433.37 00:10:22.216 [2024-12-13T06:49:26.735Z] =================================================================================================================== 00:10:22.216 [2024-12-13T06:49:26.735Z] Total : 15215.57 59.44 0.00 0.00 67050.21 13702.98 57433.37 00:10:22.216 0 00:10:22.216 06:49:24 -- target/queue_depth.sh@39 -- # killprocess 73823 00:10:22.216 06:49:24 -- common/autotest_common.sh@936 -- # '[' -z 73823 ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@940 -- # kill -0 73823 00:10:22.216 06:49:24 -- common/autotest_common.sh@941 -- # uname 00:10:22.216 06:49:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73823 00:10:22.216 killing process with pid 73823 00:10:22.216 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.216 00:10:22.216 Latency(us) 00:10:22.216 [2024-12-13T06:49:26.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.216 [2024-12-13T06:49:26.735Z] =================================================================================================================== 00:10:22.216 [2024-12-13T06:49:26.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.216 06:49:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:22.216 06:49:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73823' 00:10:22.216 06:49:24 -- common/autotest_common.sh@955 -- # kill 73823 00:10:22.216 06:49:24 -- common/autotest_common.sh@960 -- # wait 73823 00:10:22.216 06:49:24 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:22.216 06:49:24 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:22.216 06:49:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:22.216 06:49:24 -- nvmf/common.sh@116 -- # sync 00:10:22.216 06:49:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:22.216 06:49:24 -- nvmf/common.sh@119 -- # set +e 00:10:22.216 06:49:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:22.216 06:49:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:22.216 rmmod nvme_tcp 00:10:22.216 rmmod nvme_fabrics 00:10:22.216 rmmod nvme_keyring 00:10:22.216 06:49:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:22.216 06:49:24 -- nvmf/common.sh@123 -- # set -e 00:10:22.216 06:49:24 -- nvmf/common.sh@124 -- # return 0 00:10:22.216 06:49:24 -- nvmf/common.sh@477 -- # '[' -n 73790 ']' 00:10:22.216 06:49:24 -- nvmf/common.sh@478 -- # killprocess 73790 00:10:22.216 06:49:24 -- common/autotest_common.sh@936 -- # '[' -z 73790 ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@940 -- # kill -0 73790 00:10:22.216 06:49:24 -- common/autotest_common.sh@941 -- # uname 00:10:22.216 06:49:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73790 00:10:22.216 killing process with pid 73790 00:10:22.216 06:49:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:22.216 06:49:24 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:22.216 06:49:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73790' 00:10:22.216 06:49:24 -- common/autotest_common.sh@955 -- # kill 73790 00:10:22.216 06:49:24 -- common/autotest_common.sh@960 -- # wait 73790 00:10:22.216 06:49:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:22.216 06:49:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:22.216 06:49:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:22.216 06:49:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.216 06:49:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:22.216 06:49:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.217 06:49:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.217 06:49:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.217 06:49:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:22.217 00:10:22.217 real 0m12.796s 00:10:22.217 user 0m21.998s 00:10:22.217 sys 0m1.873s 00:10:22.217 06:49:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.217 ************************************ 00:10:22.217 END TEST nvmf_queue_depth 00:10:22.217 ************************************ 00:10:22.217 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:22.217 06:49:25 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.217 06:49:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.217 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:22.217 ************************************ 00:10:22.217 START TEST nvmf_multipath 00:10:22.217 ************************************ 00:10:22.217 06:49:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:22.217 * Looking for test storage... 00:10:22.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.217 06:49:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:22.217 06:49:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:22.217 06:49:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:22.217 06:49:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:22.217 06:49:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:22.217 06:49:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:22.217 06:49:25 -- scripts/common.sh@335 -- # IFS=.-: 00:10:22.217 06:49:25 -- scripts/common.sh@335 -- # read -ra ver1 00:10:22.217 06:49:25 -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.217 06:49:25 -- scripts/common.sh@336 -- # read -ra ver2 00:10:22.217 06:49:25 -- scripts/common.sh@337 -- # local 'op=<' 00:10:22.217 06:49:25 -- scripts/common.sh@339 -- # ver1_l=2 00:10:22.217 06:49:25 -- scripts/common.sh@340 -- # ver2_l=1 00:10:22.217 06:49:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:22.217 06:49:25 -- scripts/common.sh@343 -- # case "$op" in 00:10:22.217 06:49:25 -- scripts/common.sh@344 -- # : 1 00:10:22.217 06:49:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:22.217 06:49:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.217 06:49:25 -- scripts/common.sh@364 -- # decimal 1 00:10:22.217 06:49:25 -- scripts/common.sh@352 -- # local d=1 00:10:22.217 06:49:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.217 06:49:25 -- scripts/common.sh@354 -- # echo 1 00:10:22.217 06:49:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:22.217 06:49:25 -- scripts/common.sh@365 -- # decimal 2 00:10:22.217 06:49:25 -- scripts/common.sh@352 -- # local d=2 00:10:22.217 06:49:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.217 06:49:25 -- scripts/common.sh@354 -- # echo 2 00:10:22.217 06:49:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:22.217 06:49:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:22.217 06:49:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:22.217 06:49:25 -- scripts/common.sh@367 -- # return 0 00:10:22.217 06:49:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:22.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.217 --rc genhtml_branch_coverage=1 00:10:22.217 --rc genhtml_function_coverage=1 00:10:22.217 --rc genhtml_legend=1 00:10:22.217 --rc geninfo_all_blocks=1 00:10:22.217 --rc geninfo_unexecuted_blocks=1 00:10:22.217 00:10:22.217 ' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:22.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.217 --rc genhtml_branch_coverage=1 00:10:22.217 --rc genhtml_function_coverage=1 00:10:22.217 --rc genhtml_legend=1 00:10:22.217 --rc geninfo_all_blocks=1 00:10:22.217 --rc geninfo_unexecuted_blocks=1 00:10:22.217 00:10:22.217 ' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:22.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.217 --rc genhtml_branch_coverage=1 00:10:22.217 --rc genhtml_function_coverage=1 00:10:22.217 --rc genhtml_legend=1 00:10:22.217 --rc geninfo_all_blocks=1 00:10:22.217 --rc geninfo_unexecuted_blocks=1 00:10:22.217 00:10:22.217 ' 00:10:22.217 06:49:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:22.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.217 --rc genhtml_branch_coverage=1 00:10:22.217 --rc genhtml_function_coverage=1 00:10:22.217 --rc genhtml_legend=1 00:10:22.217 --rc geninfo_all_blocks=1 00:10:22.217 --rc geninfo_unexecuted_blocks=1 00:10:22.217 00:10:22.217 ' 00:10:22.217 06:49:25 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.217 06:49:25 -- nvmf/common.sh@7 -- # uname -s 00:10:22.217 06:49:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.217 06:49:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.217 06:49:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.217 06:49:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.217 06:49:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.217 06:49:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.217 06:49:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.217 06:49:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.217 06:49:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.217 06:49:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:22.217 
06:49:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:22.217 06:49:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.217 06:49:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.217 06:49:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.217 06:49:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.217 06:49:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.217 06:49:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.217 06:49:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.217 06:49:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.217 06:49:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.217 06:49:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.217 06:49:25 -- paths/export.sh@5 -- # export PATH 00:10:22.217 06:49:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.217 06:49:25 -- nvmf/common.sh@46 -- # : 0 00:10:22.217 06:49:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:22.217 06:49:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:22.217 06:49:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:22.217 06:49:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.217 06:49:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.217 06:49:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:22.217 06:49:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:22.217 06:49:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:22.217 06:49:25 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.217 06:49:25 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.217 06:49:25 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:22.217 06:49:25 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.217 06:49:25 -- target/multipath.sh@43 -- # nvmftestinit 00:10:22.217 06:49:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:22.217 06:49:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.217 06:49:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:22.217 06:49:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:22.217 06:49:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:22.217 06:49:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.217 06:49:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.217 06:49:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.217 06:49:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:22.217 06:49:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:22.217 06:49:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.217 06:49:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.217 06:49:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:22.217 06:49:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:22.217 06:49:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.217 06:49:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.217 06:49:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.217 06:49:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.217 06:49:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.217 06:49:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.217 06:49:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.217 06:49:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.218 06:49:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:22.218 06:49:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:22.218 Cannot find device "nvmf_tgt_br" 00:10:22.218 06:49:25 -- nvmf/common.sh@154 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.218 Cannot find device "nvmf_tgt_br2" 00:10:22.218 06:49:25 -- nvmf/common.sh@155 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:22.218 06:49:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:22.218 Cannot find device "nvmf_tgt_br" 00:10:22.218 06:49:25 -- nvmf/common.sh@157 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:22.218 Cannot find device "nvmf_tgt_br2" 00:10:22.218 06:49:25 -- nvmf/common.sh@158 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:22.218 06:49:25 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:22.218 06:49:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.218 06:49:25 -- nvmf/common.sh@161 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.218 06:49:25 -- nvmf/common.sh@162 -- # true 00:10:22.218 06:49:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.218 06:49:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.218 06:49:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.218 06:49:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.218 06:49:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.218 06:49:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.218 06:49:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.218 06:49:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.218 06:49:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.218 06:49:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:22.218 06:49:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:22.218 06:49:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:22.218 06:49:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:22.218 06:49:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.218 06:49:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.218 06:49:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.218 06:49:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:22.218 06:49:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:22.218 06:49:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.218 06:49:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.218 06:49:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.218 06:49:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.218 06:49:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.218 06:49:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:22.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:22.218 00:10:22.218 --- 10.0.0.2 ping statistics --- 00:10:22.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.218 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:22.218 06:49:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:22.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:22.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:22.218 00:10:22.218 --- 10.0.0.3 ping statistics --- 00:10:22.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.218 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:22.218 06:49:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:22.218 00:10:22.218 --- 10.0.0.1 ping statistics --- 00:10:22.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.218 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:22.218 06:49:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.218 06:49:25 -- nvmf/common.sh@421 -- # return 0 00:10:22.218 06:49:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:22.218 06:49:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.218 06:49:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:22.218 06:49:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:22.218 06:49:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.218 06:49:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:22.218 06:49:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:22.218 06:49:25 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:22.218 06:49:25 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:22.218 06:49:25 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:22.218 06:49:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:22.218 06:49:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.218 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:22.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.218 06:49:25 -- nvmf/common.sh@469 -- # nvmfpid=74140 00:10:22.218 06:49:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.218 06:49:25 -- nvmf/common.sh@470 -- # waitforlisten 74140 00:10:22.218 06:49:25 -- common/autotest_common.sh@829 -- # '[' -z 74140 ']' 00:10:22.218 06:49:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.218 06:49:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.218 06:49:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.218 06:49:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.218 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:22.218 [2024-12-13 06:49:25.728494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:22.218 [2024-12-13 06:49:25.728593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.218 [2024-12-13 06:49:25.873217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.218 [2024-12-13 06:49:25.915372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.218 [2024-12-13 06:49:25.915799] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:22.218 [2024-12-13 06:49:25.915963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.218 [2024-12-13 06:49:25.916153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.218 [2024-12-13 06:49:25.916411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.218 [2024-12-13 06:49:25.916475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.218 [2024-12-13 06:49:25.916562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.218 [2024-12-13 06:49:25.916570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.477 06:49:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.477 06:49:26 -- common/autotest_common.sh@862 -- # return 0 00:10:22.477 06:49:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:22.477 06:49:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.477 06:49:26 -- common/autotest_common.sh@10 -- # set +x 00:10:22.477 06:49:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.477 06:49:26 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.735 [2024-12-13 06:49:27.013016] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.735 06:49:27 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:22.994 Malloc0 00:10:22.994 06:49:27 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:22.994 06:49:27 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.254 06:49:27 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.512 [2024-12-13 06:49:27.952458] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.512 06:49:27 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.771 [2024-12-13 06:49:28.220708] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.771 06:49:28 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:24.030 06:49:28 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:24.030 06:49:28 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.030 06:49:28 -- common/autotest_common.sh@1187 -- # local i=0 00:10:24.030 06:49:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.030 06:49:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:24.030 06:49:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:26.564 06:49:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
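Multipath here means one subsystem reachable over two ANA paths: the target listens on both namespace addresses, the host connects once per address, and the kernel then exposes a single namespace backed by two controllers (nvme0c0n1 and nvme0c1n1). The essential commands from the trace, with HOSTNQN/HOSTID as produced by nvme gen-hostnqn earlier (-g and -G ask nvme-cli for TCP header and data digests):

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME  # 1: one namespace, two paths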
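The check_ana_state calls that follow verify each path's asymmetric-namespace-access state by polling sysfs. A minimal sketch of that helper, reconstructed from the trace (the real multipath.sh may differ in detail):

# Wait up to ~20 s for /sys/block/<path>/ana_state to report the expected state.
check_ana_state() {
    local path=$1 expected=$2 timeout=20
    local f=/sys/block/$path/ana_state
    while [[ ! -e $f || $(<"$f") != "$expected" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1
    done
}
check_ana_state nvme0c0n1 optimized
check_ana_state nvme0c1n1 optimized

The rest of the test flips these states through nvmf_subsystem_listener_set_ana_state (inaccessible, non_optimized, then back) while fio keeps issuing I/O, which is what demonstrates failover between the two paths.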
00:10:26.564 06:49:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:26.564 06:49:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.564 06:49:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:26.564 06:49:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.564 06:49:30 -- common/autotest_common.sh@1197 -- # return 0 00:10:26.564 06:49:30 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:26.564 06:49:30 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:26.564 06:49:30 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:26.564 06:49:30 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:26.564 06:49:30 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:26.564 06:49:30 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:26.564 06:49:30 -- target/multipath.sh@38 -- # return 0 00:10:26.564 06:49:30 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:26.564 06:49:30 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:26.564 06:49:30 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:26.564 06:49:30 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:26.564 06:49:30 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:26.564 06:49:30 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:26.564 06:49:30 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:26.564 06:49:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:26.564 06:49:30 -- target/multipath.sh@22 -- # local timeout=20 00:10:26.564 06:49:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:26.564 06:49:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:26.564 06:49:30 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.564 06:49:30 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:26.564 06:49:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:26.564 06:49:30 -- target/multipath.sh@22 -- # local timeout=20 00:10:26.564 06:49:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:26.564 06:49:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.564 06:49:30 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.564 06:49:30 -- target/multipath.sh@85 -- # echo numa 00:10:26.564 06:49:30 -- target/multipath.sh@88 -- # fio_pid=74229 00:10:26.564 06:49:30 -- target/multipath.sh@90 -- # sleep 1 00:10:26.564 06:49:30 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:26.564 [global] 00:10:26.564 thread=1 00:10:26.564 invalidate=1 00:10:26.564 rw=randrw 00:10:26.564 time_based=1 00:10:26.564 runtime=6 00:10:26.564 ioengine=libaio 00:10:26.564 direct=1 00:10:26.564 bs=4096 00:10:26.564 iodepth=128 00:10:26.564 norandommap=0 00:10:26.564 numjobs=1 00:10:26.564 00:10:26.564 verify_dump=1 00:10:26.564 verify_backlog=512 00:10:26.564 verify_state_save=0 00:10:26.564 do_verify=1 00:10:26.564 verify=crc32c-intel 00:10:26.564 [job0] 00:10:26.564 filename=/dev/nvme0n1 00:10:26.564 Could not set queue depth (nvme0n1) 00:10:26.564 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.564 fio-3.35 00:10:26.564 Starting 1 thread 00:10:27.131 06:49:31 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:27.389 06:49:31 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:27.648 06:49:32 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:27.648 06:49:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:27.648 06:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.648 06:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.648 06:49:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.648 06:49:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.648 06:49:32 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:27.648 06:49:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:27.648 06:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:10:27.648 06:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.648 06:49:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.648 06:49:32 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.648 06:49:32 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:27.907 06:49:32 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:28.166 06:49:32 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:28.166 06:49:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:28.166 06:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:10:28.166 06:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.166 06:49:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.166 06:49:32 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.166 06:49:32 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:28.166 06:49:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:28.166 06:49:32 -- target/multipath.sh@22 -- # local timeout=20 00:10:28.166 06:49:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.166 06:49:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.166 06:49:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:28.166 06:49:32 -- target/multipath.sh@104 -- # wait 74229 00:10:32.364 00:10:32.364 job0: (groupid=0, jobs=1): err= 0: pid=74256: Fri Dec 13 06:49:36 2024 00:10:32.364 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(256MiB/6002msec) 00:10:32.364 slat (usec): min=3, max=7634, avg=53.49, stdev=228.14 00:10:32.364 clat (usec): min=1121, max=14856, avg=7980.31, stdev=1462.08 00:10:32.364 lat (usec): min=1134, max=15322, avg=8033.80, stdev=1466.51 00:10:32.364 clat percentiles (usec): 00:10:32.364 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7111], 00:10:32.364 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8094], 00:10:32.364 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[11338], 00:10:32.364 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13698], 99.95th=[13960], 00:10:32.364 | 99.99th=[14877] 00:10:32.364 bw ( KiB/s): min=12263, max=29016, per=51.83%, avg=22597.73, stdev=5004.22, samples=11 00:10:32.364 iops : min= 3065, max= 7254, avg=5649.36, stdev=1251.21, samples=11 00:10:32.364 write: IOPS=6336, BW=24.8MiB/s (26.0MB/s)(133MiB/5372msec); 0 zone resets 00:10:32.364 slat (usec): min=13, max=2081, avg=63.49, stdev=156.95 00:10:32.364 clat (usec): min=1246, max=14399, avg=7036.84, stdev=1288.57 00:10:32.364 lat (usec): min=1267, max=14444, avg=7100.33, stdev=1293.82 00:10:32.364 clat percentiles (usec): 00:10:32.364 | 1.00th=[ 3195], 5.00th=[ 4228], 10.00th=[ 5342], 20.00th=[ 6521], 00:10:32.364 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373], 00:10:32.364 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8455], 00:10:32.364 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12387], 99.95th=[12649], 00:10:32.364 | 99.99th=[13698] 00:10:32.364 bw ( KiB/s): min=12758, max=28528, per=89.24%, avg=22619.55, stdev=4701.83, samples=11 00:10:32.364 iops : min= 3189, max= 7132, avg=5654.73, stdev=1175.46, samples=11 00:10:32.364 lat (msec) : 2=0.02%, 4=1.89%, 10=92.25%, 20=5.84% 00:10:32.364 cpu : usr=5.51%, sys=21.64%, ctx=5660, majf=0, minf=90 00:10:32.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:32.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.364 issued rwts: total=65414,34039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.364 00:10:32.364 Run status group 0 (all jobs): 00:10:32.364 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=256MiB (268MB), run=6002-6002msec 00:10:32.364 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=133MiB (139MB), run=5372-5372msec 00:10:32.364 00:10:32.364 Disk stats (read/write): 00:10:32.364 nvme0n1: ios=64314/33527, merge=0/0, 
ticks=490984/221523, in_queue=712507, util=98.60% 00:10:32.364 06:49:36 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:32.931 06:49:37 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:32.931 06:49:37 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:32.931 06:49:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:32.931 06:49:37 -- target/multipath.sh@22 -- # local timeout=20 00:10:32.931 06:49:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:32.931 06:49:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:32.931 06:49:37 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.931 06:49:37 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:32.931 06:49:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:32.931 06:49:37 -- target/multipath.sh@22 -- # local timeout=20 00:10:32.931 06:49:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:32.931 06:49:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:32.931 06:49:37 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.931 06:49:37 -- target/multipath.sh@113 -- # echo round-robin 00:10:32.931 06:49:37 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:32.931 06:49:37 -- target/multipath.sh@116 -- # fio_pid=74334 00:10:32.931 06:49:37 -- target/multipath.sh@118 -- # sleep 1 00:10:33.189 [global] 00:10:33.189 thread=1 00:10:33.189 invalidate=1 00:10:33.189 rw=randrw 00:10:33.189 time_based=1 00:10:33.189 runtime=6 00:10:33.189 ioengine=libaio 00:10:33.189 direct=1 00:10:33.189 bs=4096 00:10:33.189 iodepth=128 00:10:33.189 norandommap=0 00:10:33.189 numjobs=1 00:10:33.189 00:10:33.189 verify_dump=1 00:10:33.189 verify_backlog=512 00:10:33.189 verify_state_save=0 00:10:33.189 do_verify=1 00:10:33.189 verify=crc32c-intel 00:10:33.189 [job0] 00:10:33.189 filename=/dev/nvme0n1 00:10:33.189 Could not set queue depth (nvme0n1) 00:10:33.189 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.189 fio-3.35 00:10:33.189 Starting 1 thread 00:10:34.125 06:49:38 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:34.384 06:49:38 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:34.643 06:49:39 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:34.643 06:49:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:34.643 06:49:39 -- target/multipath.sh@22 -- # local timeout=20 00:10:34.643 06:49:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.643 06:49:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.643 06:49:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.643 06:49:39 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:34.643 06:49:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:34.643 06:49:39 -- target/multipath.sh@22 -- # local timeout=20 00:10:34.643 06:49:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.643 06:49:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.643 06:49:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.643 06:49:39 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:34.902 06:49:39 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:35.161 06:49:39 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:35.161 06:49:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:35.161 06:49:39 -- target/multipath.sh@22 -- # local timeout=20 00:10:35.161 06:49:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:35.161 06:49:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:35.161 06:49:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:35.161 06:49:39 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:35.161 06:49:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:35.161 06:49:39 -- target/multipath.sh@22 -- # local timeout=20 00:10:35.161 06:49:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:35.161 06:49:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.161 06:49:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:35.161 06:49:39 -- target/multipath.sh@132 -- # wait 74334 00:10:39.351 00:10:39.351 job0: (groupid=0, jobs=1): err= 0: pid=74355: Fri Dec 13 06:49:43 2024 00:10:39.351 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(287MiB/6002msec) 00:10:39.351 slat (usec): min=5, max=7617, avg=41.11, stdev=191.59 00:10:39.351 clat (usec): min=870, max=15028, avg=7199.63, stdev=1668.92 00:10:39.351 lat (usec): min=880, max=15057, avg=7240.74, stdev=1682.42 00:10:39.351 clat percentiles (usec): 00:10:39.351 | 1.00th=[ 3589], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5800], 00:10:39.351 | 30.00th=[ 6521], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:39.351 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[10159], 00:10:39.351 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13435], 99.95th=[13566], 00:10:39.351 | 99.99th=[13960] 00:10:39.351 bw ( KiB/s): min= 7840, max=41792, per=53.27%, avg=26086.45, stdev=9064.29, samples=11 00:10:39.351 iops : min= 1960, max=10448, avg=6521.55, stdev=2265.99, samples=11 00:10:39.351 write: IOPS=7232, BW=28.3MiB/s (29.6MB/s)(149MiB/5272msec); 0 zone resets 00:10:39.351 slat (usec): min=11, max=2038, avg=51.50, stdev=129.30 00:10:39.351 clat (usec): min=1291, max=13766, avg=6108.98, stdev=1643.79 00:10:39.351 lat (usec): min=1330, max=13926, avg=6160.48, stdev=1657.12 00:10:39.351 clat percentiles (usec): 00:10:39.351 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3687], 20.00th=[ 4359], 00:10:39.351 | 30.00th=[ 5145], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 6849], 00:10:39.351 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8160], 00:10:39.351 | 99.00th=[10028], 99.50th=[10945], 99.90th=[12387], 99.95th=[12911], 00:10:39.351 | 99.99th=[13304] 00:10:39.351 bw ( KiB/s): min= 8288, max=40840, per=90.06%, avg=26056.55, stdev=8758.18, samples=11 00:10:39.351 iops : min= 2072, max=10210, avg=6514.09, stdev=2189.48, samples=11 00:10:39.351 lat (usec) : 1000=0.02% 00:10:39.351 lat (msec) : 2=0.07%, 4=6.48%, 10=89.62%, 20=3.81% 00:10:39.351 cpu : usr=5.63%, sys=23.13%, ctx=6260, majf=0, minf=108 00:10:39.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:39.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.351 issued rwts: total=73480,38132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.351 00:10:39.351 Run status group 0 (all jobs): 00:10:39.351 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=287MiB (301MB), run=6002-6002msec 00:10:39.351 WRITE: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=149MiB (156MB), run=5272-5272msec 00:10:39.351 00:10:39.351 Disk stats (read/write): 00:10:39.351 nvme0n1: ios=71946/38132, merge=0/0, ticks=492081/216893, in_queue=708974, util=98.53% 00:10:39.351 06:49:43 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:39.351 06:49:43 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.351 06:49:43 -- common/autotest_common.sh@1208 -- # local i=0 00:10:39.351 06:49:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:39.351 06:49:43 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.351 06:49:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:39.351 06:49:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.351 06:49:43 -- common/autotest_common.sh@1220 -- # return 0 00:10:39.351 06:49:43 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.610 06:49:44 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:39.610 06:49:44 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:39.610 06:49:44 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:39.610 06:49:44 -- target/multipath.sh@144 -- # nvmftestfini 00:10:39.610 06:49:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:39.610 06:49:44 -- nvmf/common.sh@116 -- # sync 00:10:39.610 06:49:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:39.610 06:49:44 -- nvmf/common.sh@119 -- # set +e 00:10:39.610 06:49:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:39.610 06:49:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:39.869 rmmod nvme_tcp 00:10:39.869 rmmod nvme_fabrics 00:10:39.869 rmmod nvme_keyring 00:10:39.869 06:49:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:39.869 06:49:44 -- nvmf/common.sh@123 -- # set -e 00:10:39.869 06:49:44 -- nvmf/common.sh@124 -- # return 0 00:10:39.869 06:49:44 -- nvmf/common.sh@477 -- # '[' -n 74140 ']' 00:10:39.869 06:49:44 -- nvmf/common.sh@478 -- # killprocess 74140 00:10:39.869 06:49:44 -- common/autotest_common.sh@936 -- # '[' -z 74140 ']' 00:10:39.869 06:49:44 -- common/autotest_common.sh@940 -- # kill -0 74140 00:10:39.869 06:49:44 -- common/autotest_common.sh@941 -- # uname 00:10:39.869 06:49:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.869 06:49:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74140 00:10:39.869 killing process with pid 74140 00:10:39.869 06:49:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:39.869 06:49:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:39.869 06:49:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74140' 00:10:39.869 06:49:44 -- common/autotest_common.sh@955 -- # kill 74140 00:10:39.869 06:49:44 -- common/autotest_common.sh@960 -- # wait 74140 00:10:39.869 06:49:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:39.869 06:49:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:39.869 06:49:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:39.869 06:49:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.869 06:49:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:39.869 06:49:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.869 06:49:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.869 06:49:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.129 06:49:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:40.129 ************************************ 00:10:40.129 END TEST nvmf_multipath 00:10:40.129 ************************************ 00:10:40.129 00:10:40.129 real 0m19.298s 00:10:40.129 user 1m12.127s 00:10:40.129 sys 0m9.876s 00:10:40.129 06:49:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:40.129 06:49:44 -- common/autotest_common.sh@10 -- # set +x 00:10:40.129 06:49:44 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.129 06:49:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.129 06:49:44 -- common/autotest_common.sh@10 -- # set +x 00:10:40.129 ************************************ 00:10:40.129 START TEST nvmf_zcopy 00:10:40.129 ************************************ 00:10:40.129 06:49:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.129 * Looking for test storage... 00:10:40.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.129 06:49:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:40.129 06:49:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:40.129 06:49:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:40.129 06:49:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:40.129 06:49:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:40.129 06:49:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:40.129 06:49:44 -- scripts/common.sh@335 -- # IFS=.-: 00:10:40.129 06:49:44 -- scripts/common.sh@335 -- # read -ra ver1 00:10:40.129 06:49:44 -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.129 06:49:44 -- scripts/common.sh@336 -- # read -ra ver2 00:10:40.129 06:49:44 -- scripts/common.sh@337 -- # local 'op=<' 00:10:40.129 06:49:44 -- scripts/common.sh@339 -- # ver1_l=2 00:10:40.129 06:49:44 -- scripts/common.sh@340 -- # ver2_l=1 00:10:40.129 06:49:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:40.129 06:49:44 -- scripts/common.sh@343 -- # case "$op" in 00:10:40.129 06:49:44 -- scripts/common.sh@344 -- # : 1 00:10:40.129 06:49:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:40.129 06:49:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.129 06:49:44 -- scripts/common.sh@364 -- # decimal 1 00:10:40.129 06:49:44 -- scripts/common.sh@352 -- # local d=1 00:10:40.129 06:49:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.129 06:49:44 -- scripts/common.sh@354 -- # echo 1 00:10:40.129 06:49:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:40.129 06:49:44 -- scripts/common.sh@365 -- # decimal 2 00:10:40.129 06:49:44 -- scripts/common.sh@352 -- # local d=2 00:10:40.129 06:49:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.129 06:49:44 -- scripts/common.sh@354 -- # echo 2 00:10:40.129 06:49:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:40.129 06:49:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:40.129 06:49:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:40.129 06:49:44 -- scripts/common.sh@367 -- # return 0 00:10:40.129 06:49:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.129 --rc genhtml_branch_coverage=1 00:10:40.129 --rc genhtml_function_coverage=1 00:10:40.129 --rc genhtml_legend=1 00:10:40.129 --rc geninfo_all_blocks=1 00:10:40.129 --rc geninfo_unexecuted_blocks=1 00:10:40.129 00:10:40.129 ' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.129 --rc genhtml_branch_coverage=1 00:10:40.129 --rc genhtml_function_coverage=1 00:10:40.129 --rc genhtml_legend=1 00:10:40.129 --rc geninfo_all_blocks=1 00:10:40.129 --rc geninfo_unexecuted_blocks=1 00:10:40.129 00:10:40.129 ' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.129 --rc genhtml_branch_coverage=1 00:10:40.129 --rc genhtml_function_coverage=1 00:10:40.129 --rc genhtml_legend=1 00:10:40.129 --rc geninfo_all_blocks=1 00:10:40.129 --rc geninfo_unexecuted_blocks=1 00:10:40.129 00:10:40.129 ' 00:10:40.129 06:49:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.129 --rc genhtml_branch_coverage=1 00:10:40.129 --rc genhtml_function_coverage=1 00:10:40.129 --rc genhtml_legend=1 00:10:40.129 --rc geninfo_all_blocks=1 00:10:40.129 --rc geninfo_unexecuted_blocks=1 00:10:40.129 00:10:40.129 ' 00:10:40.129 06:49:44 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.129 06:49:44 -- nvmf/common.sh@7 -- # uname -s 00:10:40.129 06:49:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.129 06:49:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.129 06:49:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.129 06:49:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.129 06:49:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.129 06:49:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.129 06:49:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.129 06:49:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.129 06:49:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.129 06:49:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.129 06:49:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:40.129 
06:49:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:10:40.129 06:49:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.129 06:49:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.129 06:49:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.129 06:49:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.129 06:49:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.129 06:49:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.129 06:49:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.129 06:49:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.129 06:49:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.129 06:49:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.129 06:49:44 -- paths/export.sh@5 -- # export PATH 00:10:40.130 06:49:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.130 06:49:44 -- nvmf/common.sh@46 -- # : 0 00:10:40.130 06:49:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:40.130 06:49:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:40.130 06:49:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:40.130 06:49:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.130 06:49:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.130 06:49:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
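[editor's note] The nvmftestinit/nvmf_veth_init steps traced below wire the kernel initiator and the SPDK target together through veth pairs, a bridge, and a dedicated network namespace. As a minimal standalone sketch of that topology (assuming root and iproute2; interface names and addresses mirror the log, error handling and the second target interface omitted):

    # the target gets its own namespace so it does not share a
    # network stack with the kernel NVMe/TCP initiator on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side halves so 10.0.0.1 can reach 10.0.0.2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # connectivity check, as the trace does below

The "Cannot find device" and "Cannot open network namespace" messages in the trace are the expected output of the teardown commands run first: common.sh deletes any leftover interfaces and namespace from a previous run before recreating them.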
00:10:40.130 06:49:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:40.130 06:49:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:40.130 06:49:44 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:40.130 06:49:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:40.130 06:49:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.130 06:49:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:40.130 06:49:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:40.130 06:49:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:40.130 06:49:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.130 06:49:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.130 06:49:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.388 06:49:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:40.388 06:49:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:40.388 06:49:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:40.388 06:49:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:40.388 06:49:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:40.388 06:49:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:40.388 06:49:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.388 06:49:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.388 06:49:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.388 06:49:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:40.388 06:49:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.388 06:49:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.388 06:49:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.388 06:49:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.388 06:49:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.388 06:49:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.388 06:49:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.388 06:49:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.388 06:49:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:40.388 06:49:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:40.388 Cannot find device "nvmf_tgt_br" 00:10:40.388 06:49:44 -- nvmf/common.sh@154 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.388 Cannot find device "nvmf_tgt_br2" 00:10:40.388 06:49:44 -- nvmf/common.sh@155 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:40.388 06:49:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:40.388 Cannot find device "nvmf_tgt_br" 00:10:40.388 06:49:44 -- nvmf/common.sh@157 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:40.388 Cannot find device "nvmf_tgt_br2" 00:10:40.388 06:49:44 -- nvmf/common.sh@158 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:40.388 06:49:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:40.388 06:49:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.388 06:49:44 -- nvmf/common.sh@161 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.388 06:49:44 -- nvmf/common.sh@162 -- # true 00:10:40.388 06:49:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.388 06:49:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.388 06:49:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.388 06:49:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.388 06:49:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.388 06:49:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.388 06:49:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.388 06:49:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.388 06:49:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.647 06:49:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:40.647 06:49:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:40.648 06:49:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:40.648 06:49:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:40.648 06:49:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.648 06:49:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.648 06:49:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.648 06:49:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:40.648 06:49:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:40.648 06:49:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.648 06:49:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.648 06:49:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.648 06:49:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.648 06:49:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.648 06:49:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:40.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:10:40.648 00:10:40.648 --- 10.0.0.2 ping statistics --- 00:10:40.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.648 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:40.648 06:49:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:40.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:40.648 00:10:40.648 --- 10.0.0.3 ping statistics --- 00:10:40.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.648 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:40.648 06:49:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:40.648 00:10:40.648 --- 10.0.0.1 ping statistics --- 00:10:40.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.648 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:40.648 06:49:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.648 06:49:45 -- nvmf/common.sh@421 -- # return 0 00:10:40.648 06:49:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:40.648 06:49:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.648 06:49:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:40.648 06:49:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:40.648 06:49:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.648 06:49:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:40.648 06:49:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:40.648 06:49:45 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:40.648 06:49:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:40.648 06:49:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.648 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.648 06:49:45 -- nvmf/common.sh@469 -- # nvmfpid=74614 00:10:40.648 06:49:45 -- nvmf/common.sh@470 -- # waitforlisten 74614 00:10:40.648 06:49:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:40.648 06:49:45 -- common/autotest_common.sh@829 -- # '[' -z 74614 ']' 00:10:40.648 06:49:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.648 06:49:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.648 06:49:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.648 06:49:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.648 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.648 [2024-12-13 06:49:45.078437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:40.648 [2024-12-13 06:49:45.078526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.906 [2024-12-13 06:49:45.219242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.906 [2024-12-13 06:49:45.254178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:40.906 [2024-12-13 06:49:45.254300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.906 [2024-12-13 06:49:45.254313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.906 [2024-12-13 06:49:45.254320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
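[editor's note] With connectivity verified, the test starts nvmf_tgt pinned to core 1 (-m 0x2) inside the namespace and then configures it over JSON-RPC. A condensed sketch of the RPC sequence zcopy.sh issues next (flags taken from the trace; assuming the target's RPC socket is the default /var/tmp/spdk.sock, which a Unix socket makes reachable from the host without ip netns exec):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), fixed serial, at most 10 namespaces (-m 10)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB RAM-backed bdev with a 4096-byte block size, exported as NSID 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The heredoc-built JSON visible in the trace serves the same purpose for the initiator side: gen_nvmf_target_json emits a bdev_nvme_attach_controller config that bdevperf reads from a file descriptor (--json /dev/fd/62) instead of a file on disk.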
00:10:40.906 [2024-12-13 06:49:45.254385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.906 06:49:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.906 06:49:45 -- common/autotest_common.sh@862 -- # return 0 00:10:40.906 06:49:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:40.906 06:49:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 06:49:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.906 06:49:45 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:40.906 06:49:45 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 [2024-12-13 06:49:45.373882] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.906 06:49:45 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.906 06:49:45 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 [2024-12-13 06:49:45.389948] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.906 06:49:45 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.906 06:49:45 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 malloc0 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.906 06:49:45 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:40.906 06:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.906 06:49:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.906 06:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.907 06:49:45 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:40.907 06:49:45 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:40.907 06:49:45 -- nvmf/common.sh@520 -- # config=() 00:10:40.907 06:49:45 -- nvmf/common.sh@520 -- # local subsystem config 00:10:40.907 06:49:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:40.907 06:49:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:40.907 { 00:10:40.907 "params": { 00:10:40.907 "name": "Nvme$subsystem", 00:10:40.907 "trtype": "$TEST_TRANSPORT", 
00:10:40.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.907 "adrfam": "ipv4", 00:10:40.907 "trsvcid": "$NVMF_PORT", 00:10:40.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.907 "hdgst": ${hdgst:-false}, 00:10:40.907 "ddgst": ${ddgst:-false} 00:10:40.907 }, 00:10:40.907 "method": "bdev_nvme_attach_controller" 00:10:40.907 } 00:10:40.907 EOF 00:10:40.907 )") 00:10:40.907 06:49:45 -- nvmf/common.sh@542 -- # cat 00:10:41.165 06:49:45 -- nvmf/common.sh@544 -- # jq . 00:10:41.165 06:49:45 -- nvmf/common.sh@545 -- # IFS=, 00:10:41.165 06:49:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:41.165 "params": { 00:10:41.165 "name": "Nvme1", 00:10:41.165 "trtype": "tcp", 00:10:41.165 "traddr": "10.0.0.2", 00:10:41.165 "adrfam": "ipv4", 00:10:41.165 "trsvcid": "4420", 00:10:41.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.165 "hdgst": false, 00:10:41.165 "ddgst": false 00:10:41.165 }, 00:10:41.165 "method": "bdev_nvme_attach_controller" 00:10:41.165 }' 00:10:41.165 [2024-12-13 06:49:45.473469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:41.165 [2024-12-13 06:49:45.473558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74639 ] 00:10:41.165 [2024-12-13 06:49:45.615645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.165 [2024-12-13 06:49:45.654612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.423 Running I/O for 10 seconds... 00:10:51.393 00:10:51.393 Latency(us) 00:10:51.393 [2024-12-13T06:49:55.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.393 [2024-12-13T06:49:55.912Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:51.393 Verification LBA range: start 0x0 length 0x1000 00:10:51.393 Nvme1n1 : 10.01 9896.00 77.31 0.00 0.00 12901.84 1251.14 19184.17 00:10:51.393 [2024-12-13T06:49:55.912Z] =================================================================================================================== 00:10:51.393 [2024-12-13T06:49:55.912Z] Total : 9896.00 77.31 0.00 0.00 12901.84 1251.14 19184.17 00:10:51.652 06:49:55 -- target/zcopy.sh@39 -- # perfpid=74751 00:10:51.652 06:49:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:51.652 06:49:55 -- common/autotest_common.sh@10 -- # set +x 00:10:51.652 06:49:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:51.652 06:49:55 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:51.652 06:49:55 -- nvmf/common.sh@520 -- # config=() 00:10:51.652 06:49:55 -- nvmf/common.sh@520 -- # local subsystem config 00:10:51.652 06:49:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:51.652 06:49:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:51.652 { 00:10:51.652 "params": { 00:10:51.652 "name": "Nvme$subsystem", 00:10:51.652 "trtype": "$TEST_TRANSPORT", 00:10:51.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.652 "adrfam": "ipv4", 00:10:51.652 "trsvcid": "$NVMF_PORT", 00:10:51.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.652 "hdgst": ${hdgst:-false}, 00:10:51.652 "ddgst": ${ddgst:-false} 
00:10:51.652 }, 00:10:51.652 "method": "bdev_nvme_attach_controller" 00:10:51.652 } 00:10:51.652 EOF 00:10:51.652 )") 00:10:51.652 [2024-12-13 06:49:55.940480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.940702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 06:49:55 -- nvmf/common.sh@542 -- # cat 00:10:51.652 06:49:55 -- nvmf/common.sh@544 -- # jq . 00:10:51.652 [2024-12-13 06:49:55.948435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.948464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 06:49:55 -- nvmf/common.sh@545 -- # IFS=, 00:10:51.652 06:49:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:51.652 "params": { 00:10:51.652 "name": "Nvme1", 00:10:51.652 "trtype": "tcp", 00:10:51.652 "traddr": "10.0.0.2", 00:10:51.652 "adrfam": "ipv4", 00:10:51.652 "trsvcid": "4420", 00:10:51.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.652 "hdgst": false, 00:10:51.652 "ddgst": false 00:10:51.652 }, 00:10:51.652 "method": "bdev_nvme_attach_controller" 00:10:51.652 }' 00:10:51.652 [2024-12-13 06:49:55.956433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.956473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:55.964434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.964492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:55.976433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.976468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:55.984425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.984480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:55.986798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:51.652 [2024-12-13 06:49:55.986885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74751 ] 00:10:51.652 [2024-12-13 06:49:55.992425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:55.992460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.000439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.000464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.008461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.008486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.016426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.016480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.024449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.024473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.032432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.032464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.040433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.040465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.048444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.048468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.056467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.056493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.064498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.064533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.072496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.072525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.080499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.080787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.088513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.088551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.096504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.096539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.104526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.104560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.112486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.112512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.120479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.120504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.124672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.652 [2024-12-13 06:49:56.128525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.128561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.140518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.140557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.152545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.652 [2024-12-13 06:49:56.152584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.652 [2024-12-13 06:49:56.159213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.652 [2024-12-13 06:49:56.160529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.653 [2024-12-13 06:49:56.160557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.653 [2024-12-13 06:49:56.168546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.653 [2024-12-13 06:49:56.168584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.176562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.176631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.184542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.184581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.192545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.192599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.204559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.204597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.212547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.212584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.220523] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.220549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.228552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.228748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.236573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.236609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.244589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.244641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.256584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.256618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.264582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.911 [2024-12-13 06:49:56.264783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.911 [2024-12-13 06:49:56.272593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.272625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.280595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.280623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.288607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.288638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 Running I/O for 5 seconds... 
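[editor's note] The wall of paired "Requested NSID 1 already in use" / "Unable to add namespace" errors surrounding the 5-second bdevperf runs is evidently exercised deliberately rather than a test failure: the run keeps progressing past them, with the RPC layer rejecting each attempt to re-add the namespace that is already attached. A hypothetical loop that would produce the same pattern (the exact trigger inside zcopy.sh may differ; $rpc as in the sketch above):

    # each call fails with "Requested NSID 1 already in use" because
    # malloc0 is already exported as NSID 1; the point is to hammer the
    # namespace-management path while bdevperf drives random I/O
    for _ in $(seq 1 100); do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done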
00:10:51.912 [2024-12-13 06:49:56.296616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.296645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.309478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.309512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.319199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.319230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.328502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.328533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.337760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.337790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.347107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.347138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.356379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.356580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.370257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.370290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.378597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.378629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.389925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.389956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.400920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.400952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.409314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.409345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.912 [2024-12-13 06:49:56.419554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.912 [2024-12-13 06:49:56.419586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.430142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.430197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.442180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 
[2024-12-13 06:49:56.442273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.451155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.451457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.462132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.462184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.474340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.474434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.492698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.492750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.508088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.508128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.516817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.516851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.528621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.528653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.538152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.538338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.548548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.548581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.170 [2024-12-13 06:49:56.558200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.170 [2024-12-13 06:49:56.558406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.568398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.568462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.578440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.578646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.593038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.593072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.601847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.602031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.614057] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.614090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.623228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.623259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.635266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.635299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.644756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.644936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.654988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.655021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.664499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.664530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.676058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.676274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.171 [2024-12-13 06:49:56.684824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.171 [2024-12-13 06:49:56.685000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.696791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.696973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.706442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.706623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.716112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.716286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.725490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.725671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.735003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.735182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.744796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.744992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.754557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.754720] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.765408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.765614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.777565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.777764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.789098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.789294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.797581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.797744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.809406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.809570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.819100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.819276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.833075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.833252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.842175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.842411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.855002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.855179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.864611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.864820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.878877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.878910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.887713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.887892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.898137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.898188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.907906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.907980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.430 [2024-12-13 06:49:56.922539] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.430 [2024-12-13 06:49:56.922571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry ERROR pair (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: "Unable to add namespace") repeats back-to-back for the rest of this test phase, wall-clock 2024-12-13 06:49:56.93 to 06:50:00.37, elapsed 00:10:52.430 to 00:10:56.060; only the timestamps differ between repeats, so the duplicate entries are elided here ...]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.230666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.240563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.240594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.250310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.250525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.260817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.260977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.273378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.273562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.281634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.281827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.291392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.291557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.301283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.301490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.801 [2024-12-13 06:50:00.310986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.801 [2024-12-13 06:50:00.311167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.322211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.322423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.334985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.335165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.346254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.346447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.355322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.355634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.369864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.370156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.379219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.379550] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.391330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.060 [2024-12-13 06:50:00.391659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.060 [2024-12-13 06:50:00.400426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.400652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.413048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.413333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.422765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.423024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.433068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.433238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.443519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.443792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.455563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.455741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.464412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.464586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.478411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.478593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.493978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.494146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.511404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.511580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.522056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.522228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.536978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.537125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.552724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.552953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.562995] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.563143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.061 [2024-12-13 06:50:00.574977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.061 [2024-12-13 06:50:00.575130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.586586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.586753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.599231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.599440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.608243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.608449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.623519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.623709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.639319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.639528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.656303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.656523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.667187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.667410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.675788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.675988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.687548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.687741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.697419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.697671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.707876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.708136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.718433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.718671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.728741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.728958] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.743081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.743408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.751844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.752109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.320 [2024-12-13 06:50:00.762862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.320 [2024-12-13 06:50:00.763051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.775712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.775982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.784610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.784805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.800219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.800464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.808986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.809163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.820993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.821172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.321 [2024-12-13 06:50:00.830838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.321 [2024-12-13 06:50:00.831031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.841666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.841896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.853783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.853963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.861731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.861909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.874164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.874327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.885205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.885410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.893225] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.893412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.905632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.905808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.914721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.914753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.924689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.924884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.934612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.934645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.944210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.944419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.954075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.580 [2024-12-13 06:50:00.954108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.580 [2024-12-13 06:50:00.964089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:00.964323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:00.975163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:00.975198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:00.991616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:00.991649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.000623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.000657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.013985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.014162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.025075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.025278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.035685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.035881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.046089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.046121] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.056048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.056087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.065605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.065637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.075505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.075537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.085225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.085258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.581 [2024-12-13 06:50:01.095548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.581 [2024-12-13 06:50:01.095584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.106528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.106563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.116191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.116244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.126257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.126290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.135900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.135974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.145354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.145412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.155324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.155382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.166032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.166065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.178146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.178179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.187018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.187050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.200567] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.200601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.209874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.209908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.224008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.224207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.233861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.234058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.244396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.244594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.256181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.256399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.265064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.265244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.275432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.275623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.286150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.286329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.297337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.297542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.303605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.303797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 00:10:56.840 Latency(us) 00:10:56.840 [2024-12-13T06:50:01.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.840 [2024-12-13T06:50:01.359Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:56.840 Nvme1n1 : 5.01 12454.63 97.30 0.00 0.00 10266.26 4081.11 21686.46 00:10:56.840 [2024-12-13T06:50:01.359Z] =================================================================================================================== 00:10:56.840 [2024-12-13T06:50:01.359Z] Total : 12454.63 97.30 0.00 0.00 10266.26 4081.11 21686.46 00:10:56.840 [2024-12-13 06:50:01.311606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.311799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.319602] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.319771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.331647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.332003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.339639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.339987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.347639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.347976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.840 [2024-12-13 06:50:01.355671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.840 [2024-12-13 06:50:01.356004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.367664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.367735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.375635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.375673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.383615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.383818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.391643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.391680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.403643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.403682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.098 [2024-12-13 06:50:01.419648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.098 [2024-12-13 06:50:01.419964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.099 [2024-12-13 06:50:01.427630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.099 [2024-12-13 06:50:01.427813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.099 [2024-12-13 06:50:01.435631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.099 [2024-12-13 06:50:01.435829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.099 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74751) - No such process 00:10:57.099 06:50:01 -- target/zcopy.sh@49 -- # wait 74751 00:10:57.099 06:50:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.099 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.099 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.099 
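[for reference: a minimal sketch of how the NSID conflict condensed above can be reproduced by hand against a running target, assuming scripts/rpc.py from this repo and an existing subsystem nqn.2016-06.io.spdk:cnode1 whose NSID 1 is already populated; the spare bdev name malloc1 is hypothetical]
# Adding a namespace with an NSID that is already taken makes
# spdk_nvmf_subsystem_add_ns_ext fail with "Requested NSID 1 already in use",
# which the RPC layer surfaces as "Unable to add namespace" -- the exact
# pair repeated throughout the condensed block above.
scripts/rpc.py bdev_malloc_create 64 512 -b malloc1
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc1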
06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.099 06:50:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:57.099 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.099 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.099 delay0 00:10:57.099 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.099 06:50:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:57.099 06:50:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.099 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.099 06:50:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.099 06:50:01 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:57.358 [2024-12-13 06:50:01.636300] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:03.923 Initializing NVMe Controllers 00:11:03.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:03.923 Initialization complete. Launching workers. 00:11:03.923 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 152 00:11:03.923 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 439, failed to submit 33 00:11:03.923 success 340, unsuccess 99, failed 0 00:11:03.923 06:50:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:03.923 06:50:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:03.923 06:50:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:03.923 06:50:07 -- nvmf/common.sh@116 -- # sync 00:11:03.923 06:50:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:03.923 06:50:07 -- nvmf/common.sh@119 -- # set +e 00:11:03.923 06:50:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:03.923 06:50:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:03.923 rmmod nvme_tcp 00:11:03.923 rmmod nvme_fabrics 00:11:03.923 rmmod nvme_keyring 00:11:03.923 06:50:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:03.923 06:50:07 -- nvmf/common.sh@123 -- # set -e 00:11:03.923 06:50:07 -- nvmf/common.sh@124 -- # return 0 00:11:03.923 06:50:07 -- nvmf/common.sh@477 -- # '[' -n 74614 ']' 00:11:03.923 06:50:07 -- nvmf/common.sh@478 -- # killprocess 74614 00:11:03.923 06:50:07 -- common/autotest_common.sh@936 -- # '[' -z 74614 ']' 00:11:03.923 06:50:07 -- common/autotest_common.sh@940 -- # kill -0 74614 00:11:03.923 06:50:07 -- common/autotest_common.sh@941 -- # uname 00:11:03.923 06:50:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.923 06:50:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74614 00:11:03.923 06:50:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:03.923 06:50:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:03.923 killing process with pid 74614 00:11:03.923 06:50:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74614' 00:11:03.923 06:50:07 -- common/autotest_common.sh@955 -- # kill 74614 00:11:03.923 06:50:07 -- common/autotest_common.sh@960 -- # wait 74614 00:11:03.923 06:50:07 -- nvmf/common.sh@480 -- # '[' '' == 
iso ']' 00:11:03.923 06:50:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:03.923 06:50:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:03.923 06:50:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.923 06:50:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:03.923 06:50:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.923 06:50:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.923 06:50:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.923 06:50:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:03.923 00:11:03.923 real 0m23.584s 00:11:03.923 user 0m38.801s 00:11:03.923 sys 0m6.597s 00:11:03.923 ************************************ 00:11:03.923 END TEST nvmf_zcopy 00:11:03.923 ************************************ 00:11:03.923 06:50:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.923 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:03.923 06:50:08 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:03.923 06:50:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.923 06:50:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.923 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:03.923 ************************************ 00:11:03.923 START TEST nvmf_nmic 00:11:03.923 ************************************ 00:11:03.923 06:50:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:03.923 * Looking for test storage... 00:11:03.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.923 06:50:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:03.923 06:50:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:03.923 06:50:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:03.923 06:50:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:03.923 06:50:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:03.923 06:50:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:03.924 06:50:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:03.924 06:50:08 -- scripts/common.sh@335 -- # IFS=.-: 00:11:03.924 06:50:08 -- scripts/common.sh@335 -- # read -ra ver1 00:11:03.924 06:50:08 -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.924 06:50:08 -- scripts/common.sh@336 -- # read -ra ver2 00:11:03.924 06:50:08 -- scripts/common.sh@337 -- # local 'op=<' 00:11:03.924 06:50:08 -- scripts/common.sh@339 -- # ver1_l=2 00:11:03.924 06:50:08 -- scripts/common.sh@340 -- # ver2_l=1 00:11:03.924 06:50:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:03.924 06:50:08 -- scripts/common.sh@343 -- # case "$op" in 00:11:03.924 06:50:08 -- scripts/common.sh@344 -- # : 1 00:11:03.924 06:50:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:03.924 06:50:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.924 06:50:08 -- scripts/common.sh@364 -- # decimal 1 00:11:03.924 06:50:08 -- scripts/common.sh@352 -- # local d=1 00:11:03.924 06:50:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.924 06:50:08 -- scripts/common.sh@354 -- # echo 1 00:11:03.924 06:50:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:03.924 06:50:08 -- scripts/common.sh@365 -- # decimal 2 00:11:03.924 06:50:08 -- scripts/common.sh@352 -- # local d=2 00:11:03.924 06:50:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.924 06:50:08 -- scripts/common.sh@354 -- # echo 2 00:11:03.924 06:50:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:03.924 06:50:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:03.924 06:50:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:03.924 06:50:08 -- scripts/common.sh@367 -- # return 0 00:11:03.924 06:50:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.924 06:50:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:03.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.924 --rc genhtml_branch_coverage=1 00:11:03.924 --rc genhtml_function_coverage=1 00:11:03.924 --rc genhtml_legend=1 00:11:03.924 --rc geninfo_all_blocks=1 00:11:03.924 --rc geninfo_unexecuted_blocks=1 00:11:03.924 00:11:03.924 ' 00:11:03.924
[log condensed: autotest_common.sh@1703-1704 then re-echo the same option list three more times as LCOV_OPTS=, export LCOV=lcov, and LCOV=lcov; the duplicate blocks are elided]
06:50:08 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.924 06:50:08 -- nvmf/common.sh@7 -- # uname -s 00:11:03.924 06:50:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.924 06:50:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.924 06:50:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.924 06:50:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.924 06:50:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.924 06:50:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.924 06:50:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.924 06:50:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.924 06:50:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.924 06:50:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 
06:50:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:03.924 06:50:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.924 06:50:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.924 06:50:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.924 06:50:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.924 06:50:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.924 06:50:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.924 06:50:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.924
[log condensed: paths/export.sh@2-4 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH and @6 echoes the result; the multi-kilobyte repeated PATH values are elided]
06:50:08 -- paths/export.sh@5 -- # export PATH 00:11:03.924 06:50:08 -- nvmf/common.sh@46 -- # : 0 00:11:03.924 06:50:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:03.924 06:50:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:03.924 06:50:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:03.924 06:50:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.924 06:50:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.924 06:50:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:11:03.924 06:50:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:03.924 06:50:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:03.924 06:50:08 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.924 06:50:08 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.924 06:50:08 -- target/nmic.sh@14 -- # nvmftestinit 00:11:03.924 06:50:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:03.924 06:50:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.924 06:50:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:03.924 06:50:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:03.924 06:50:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:03.924 06:50:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.924 06:50:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.924 06:50:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.924 06:50:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:03.924 06:50:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:03.924 06:50:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.924 06:50:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.924 06:50:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.924 06:50:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:03.924 06:50:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.924 06:50:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.924 06:50:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.924 06:50:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.924 06:50:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.924 06:50:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.924 06:50:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.924 06:50:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.924 06:50:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:03.924 06:50:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:03.924 Cannot find device "nvmf_tgt_br" 00:11:03.924 06:50:08 -- nvmf/common.sh@154 -- # true 00:11:03.924 06:50:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.924 Cannot find device "nvmf_tgt_br2" 00:11:03.924 06:50:08 -- nvmf/common.sh@155 -- # true 00:11:03.924 06:50:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:03.924 06:50:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:03.924 Cannot find device "nvmf_tgt_br" 00:11:03.924 06:50:08 -- nvmf/common.sh@157 -- # true 00:11:03.924 06:50:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:03.924 Cannot find device "nvmf_tgt_br2" 00:11:03.924 06:50:08 -- nvmf/common.sh@158 -- # true 00:11:03.924 06:50:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:03.924 06:50:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:03.924 06:50:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.924 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:04.184 06:50:08 -- nvmf/common.sh@161 -- # true 00:11:04.184 06:50:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.184 06:50:08 -- nvmf/common.sh@162 -- # true 00:11:04.184 06:50:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.184 06:50:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.184 06:50:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.184 06:50:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.184 06:50:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.184 06:50:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.184 06:50:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.184 06:50:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:04.184 06:50:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:04.184 06:50:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:04.184 06:50:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:04.184 06:50:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:04.184 06:50:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:04.184 06:50:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.184 06:50:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.184 06:50:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.184 06:50:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:04.184 06:50:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:04.184 06:50:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.184 06:50:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.184 06:50:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.184 06:50:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.184 06:50:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.184 06:50:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:04.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:11:04.184 00:11:04.184 --- 10.0.0.2 ping statistics --- 00:11:04.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.184 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:04.184 06:50:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:04.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:04.184 00:11:04.184 --- 10.0.0.3 ping statistics --- 00:11:04.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.184 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:04.184 06:50:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:04.184 00:11:04.184 --- 10.0.0.1 ping statistics --- 00:11:04.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.184 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:04.184 06:50:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.184 06:50:08 -- nvmf/common.sh@421 -- # return 0 00:11:04.184 06:50:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:04.184 06:50:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.184 06:50:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:04.184 06:50:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:04.184 06:50:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.184 06:50:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:04.184 06:50:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:04.184 06:50:08 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:04.184 06:50:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:04.184 06:50:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.184 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.184 06:50:08 -- nvmf/common.sh@469 -- # nvmfpid=75083 00:11:04.184 06:50:08 -- nvmf/common.sh@470 -- # waitforlisten 75083 00:11:04.184 06:50:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.184 06:50:08 -- common/autotest_common.sh@829 -- # '[' -z 75083 ']' 00:11:04.184 06:50:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.184 06:50:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.184 06:50:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.184 06:50:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.184 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.184 [2024-12-13 06:50:08.672648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:04.184 [2024-12-13 06:50:08.673130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.443 [2024-12-13 06:50:08.806595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.443 [2024-12-13 06:50:08.839595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.443 [2024-12-13 06:50:08.839998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.443 [2024-12-13 06:50:08.840124] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.443 [2024-12-13 06:50:08.840218] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
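[for reference: the veth/bridge topology assembled by nvmf_veth_init above, using this run's interface and namespace names; the ping checks above confirm 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace]
#  nvmf_init_if (10.0.0.1/24, host)             <-veth-> nvmf_init_br \
#  nvmf_tgt_if  (10.0.0.2/24, nvmf_tgt_ns_spdk) <-veth-> nvmf_tgt_br   >- enslaved to bridge nvmf_br
#  nvmf_tgt_if2 (10.0.0.3/24, nvmf_tgt_ns_spdk) <-veth-> nvmf_tgt_br2 /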
00:11:04.443 [2024-12-13 06:50:08.840444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.443 [2024-12-13 06:50:08.840579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.443 [2024-12-13 06:50:08.841307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.443 [2024-12-13 06:50:08.841256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.443 06:50:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.443 06:50:08 -- common/autotest_common.sh@862 -- # return 0 00:11:04.443 06:50:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:04.443 06:50:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.443 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 06:50:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.702 06:50:08 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.702 06:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 [2024-12-13 06:50:08.974270] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.702 06:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:08 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.702 06:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 Malloc0 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:09 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:09 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:09 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 [2024-12-13 06:50:09.034912] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 test case1: single bdev can't be used in multiple subsystems 00:11:04.702 06:50:09 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:04.702 06:50:09 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:09 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.702 06:50:09 -- target/nmic.sh@28 -- # nmic_status=0 00:11:04.702 06:50:09 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:04.702 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.702 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.702 [2024-12-13 06:50:09.058804] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:04.702 [2024-12-13 06:50:09.058937] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:04.702 [2024-12-13 06:50:09.059010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.702 request: 00:11:04.702 { 00:11:04.702 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:04.702 "namespace": { 00:11:04.702 "bdev_name": "Malloc0" 00:11:04.702 }, 00:11:04.702 "method": "nvmf_subsystem_add_ns", 00:11:04.702 "req_id": 1 00:11:04.702 } 00:11:04.702 Got JSON-RPC error response 00:11:04.702 response: 00:11:04.702 { 00:11:04.702 "code": -32602, 00:11:04.702 "message": "Invalid parameters" 00:11:04.702 } 00:11:04.703 06:50:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:04.703 06:50:09 -- target/nmic.sh@29 -- # nmic_status=1 00:11:04.703 06:50:09 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:04.703 Adding namespace failed - expected result. 00:11:04.703 06:50:09 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:04.703 test case2: host connect to nvmf target in multiple paths 00:11:04.703 06:50:09 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:04.703 06:50:09 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:04.703 06:50:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.703 06:50:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.703 [2024-12-13 06:50:09.070911] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:04.703 06:50:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.703 06:50:09 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.703 06:50:09 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:04.961 06:50:09 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.961 06:50:09 -- common/autotest_common.sh@1187 -- # local i=0 00:11:04.961 06:50:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.961 06:50:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:04.961 06:50:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:06.863 06:50:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:06.863 06:50:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:06.863 06:50:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.863 06:50:11 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:11:06.863 06:50:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.863 06:50:11 -- common/autotest_common.sh@1197 -- # return 0 00:11:06.863 06:50:11 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:06.863 [global] 00:11:06.863 thread=1 00:11:06.863 invalidate=1 00:11:06.863 rw=write 00:11:06.863 time_based=1 00:11:06.863 runtime=1 00:11:06.863 ioengine=libaio 00:11:06.863 direct=1 00:11:06.863 bs=4096 00:11:06.863 iodepth=1 00:11:06.863 norandommap=0 00:11:06.863 numjobs=1 00:11:06.863 00:11:06.863 verify_dump=1 00:11:06.863 verify_backlog=512 00:11:06.863 verify_state_save=0 00:11:06.863 do_verify=1 00:11:06.863 verify=crc32c-intel 00:11:07.122 [job0] 00:11:07.122 filename=/dev/nvme0n1 00:11:07.122 Could not set queue depth (nvme0n1) 00:11:07.122 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.122 fio-3.35 00:11:07.122 Starting 1 thread 00:11:08.499 00:11:08.499 job0: (groupid=0, jobs=1): err= 0: pid=75167: Fri Dec 13 06:50:12 2024 00:11:08.499 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:08.499 slat (nsec): min=11395, max=59723, avg=15237.23, stdev=5293.71 00:11:08.499 clat (usec): min=125, max=624, avg=175.70, stdev=26.81 00:11:08.499 lat (usec): min=137, max=637, avg=190.93, stdev=27.85 00:11:08.499 clat percentiles (usec): 00:11:08.499 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:11:08.499 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:11:08.499 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 223], 00:11:08.499 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 506], 00:11:08.499 | 99.99th=[ 627] 00:11:08.499 write: IOPS=3099, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:11:08.499 slat (usec): min=13, max=112, avg=22.11, stdev= 7.22 00:11:08.499 clat (usec): min=77, max=234, avg=107.59, stdev=18.19 00:11:08.499 lat (usec): min=94, max=294, avg=129.70, stdev=20.55 00:11:08.499 clat percentiles (usec): 00:11:08.499 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 93], 00:11:08.499 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 109], 00:11:08.499 | 70.00th=[ 114], 80.00th=[ 122], 90.00th=[ 135], 95.00th=[ 143], 00:11:08.499 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 212], 00:11:08.499 | 99.99th=[ 235] 00:11:08.499 bw ( KiB/s): min=12480, max=12480, per=100.00%, avg=12480.00, stdev= 0.00, samples=1 00:11:08.499 iops : min= 3120, max= 3120, avg=3120.00, stdev= 0.00, samples=1 00:11:08.499 lat (usec) : 100=20.79%, 250=78.77%, 500=0.40%, 750=0.03% 00:11:08.499 cpu : usr=3.00%, sys=8.50%, ctx=6175, majf=0, minf=5 00:11:08.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.499 issued rwts: total=3072,3103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.499 00:11:08.499 Run status group 0 (all jobs): 00:11:08.499 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:08.499 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:11:08.499 00:11:08.499 Disk stats (read/write): 
00:11:08.499 nvme0n1: ios=2646/3072, merge=0/0, ticks=503/385, in_queue=888, util=91.28% 00:11:08.499 06:50:12 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:08.499 06:50:12 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.499 06:50:12 -- common/autotest_common.sh@1208 -- # local i=0 00:11:08.499 06:50:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:08.499 06:50:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.499 06:50:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.499 06:50:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:08.499 06:50:12 -- common/autotest_common.sh@1220 -- # return 0 00:11:08.499 06:50:12 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:08.499 06:50:12 -- target/nmic.sh@53 -- # nvmftestfini 00:11:08.499 06:50:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:08.499 06:50:12 -- nvmf/common.sh@116 -- # sync 00:11:08.499 06:50:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:08.499 06:50:12 -- nvmf/common.sh@119 -- # set +e 00:11:08.499 06:50:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:08.499 06:50:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:08.499 rmmod nvme_tcp 00:11:08.499 rmmod nvme_fabrics 00:11:08.499 rmmod nvme_keyring 00:11:08.499 06:50:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:08.499 06:50:12 -- nvmf/common.sh@123 -- # set -e 00:11:08.499 06:50:12 -- nvmf/common.sh@124 -- # return 0 00:11:08.499 06:50:12 -- nvmf/common.sh@477 -- # '[' -n 75083 ']' 00:11:08.499 06:50:12 -- nvmf/common.sh@478 -- # killprocess 75083 00:11:08.499 06:50:12 -- common/autotest_common.sh@936 -- # '[' -z 75083 ']' 00:11:08.499 06:50:12 -- common/autotest_common.sh@940 -- # kill -0 75083 00:11:08.499 06:50:12 -- common/autotest_common.sh@941 -- # uname 00:11:08.499 06:50:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.499 06:50:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75083 00:11:08.499 06:50:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.499 06:50:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.499 killing process with pid 75083 00:11:08.499 06:50:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75083' 00:11:08.499 06:50:12 -- common/autotest_common.sh@955 -- # kill 75083 00:11:08.499 06:50:12 -- common/autotest_common.sh@960 -- # wait 75083 00:11:08.758 06:50:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:08.758 06:50:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:08.758 06:50:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:08.758 06:50:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.758 06:50:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:08.758 06:50:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.758 06:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.758 06:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.758 06:50:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:08.758 00:11:08.758 real 0m4.970s 00:11:08.758 user 0m15.361s 00:11:08.758 sys 0m2.204s 00:11:08.758 06:50:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.758 06:50:13 -- common/autotest_common.sh@10 -- # set +x 
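That closes out nmic: test case1 confirmed that Malloc0, already claimed exclusive_write by cnode1, cannot be added to cnode2 (the JSON-RPC -32602 error above is the expected result), and test case2 reached the same subsystem over two listener ports. The dual-path half, condensed to a sketch (the literal --hostnqn/--hostid values in the log are folded back into the NVME_HOST variables they came from):

    # Two controllers to one subsystem, one per listener port
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # ... fio write/verify against /dev/nvme0n1 ...
    # A single disconnect drops both paths, hence "disconnected 2 controller(s)" above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1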
00:11:08.758 ************************************ 00:11:08.758 END TEST nvmf_nmic 00:11:08.758 ************************************ 00:11:08.758 06:50:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:08.758 06:50:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:08.758 06:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.758 06:50:13 -- common/autotest_common.sh@10 -- # set +x 00:11:08.758 ************************************ 00:11:08.758 START TEST nvmf_fio_target 00:11:08.758 ************************************ 00:11:08.758 06:50:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:08.758 * Looking for test storage... 00:11:08.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.758 06:50:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.758 06:50:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.758 06:50:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.758 06:50:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.758 06:50:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.758 06:50:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.758 06:50:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.758 06:50:13 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.758 06:50:13 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.758 06:50:13 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.758 06:50:13 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.758 06:50:13 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.758 06:50:13 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.758 06:50:13 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.758 06:50:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.758 06:50:13 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.758 06:50:13 -- scripts/common.sh@344 -- # : 1 00:11:08.759 06:50:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.759 06:50:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.759 06:50:13 -- scripts/common.sh@364 -- # decimal 1 00:11:08.759 06:50:13 -- scripts/common.sh@352 -- # local d=1 00:11:09.018 06:50:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.018 06:50:13 -- scripts/common.sh@354 -- # echo 1 00:11:09.018 06:50:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:09.018 06:50:13 -- scripts/common.sh@365 -- # decimal 2 00:11:09.018 06:50:13 -- scripts/common.sh@352 -- # local d=2 00:11:09.018 06:50:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.018 06:50:13 -- scripts/common.sh@354 -- # echo 2 00:11:09.018 06:50:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:09.018 06:50:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:09.018 06:50:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:09.018 06:50:13 -- scripts/common.sh@367 -- # return 0 00:11:09.018 06:50:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.018 06:50:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.018 --rc genhtml_branch_coverage=1 00:11:09.018 --rc genhtml_function_coverage=1 00:11:09.018 --rc genhtml_legend=1 00:11:09.018 --rc geninfo_all_blocks=1 00:11:09.018 --rc geninfo_unexecuted_blocks=1 00:11:09.018 00:11:09.018 ' 00:11:09.018 06:50:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.018 --rc genhtml_branch_coverage=1 00:11:09.018 --rc genhtml_function_coverage=1 00:11:09.018 --rc genhtml_legend=1 00:11:09.018 --rc geninfo_all_blocks=1 00:11:09.018 --rc geninfo_unexecuted_blocks=1 00:11:09.018 00:11:09.018 ' 00:11:09.018 06:50:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.018 --rc genhtml_branch_coverage=1 00:11:09.018 --rc genhtml_function_coverage=1 00:11:09.018 --rc genhtml_legend=1 00:11:09.018 --rc geninfo_all_blocks=1 00:11:09.018 --rc geninfo_unexecuted_blocks=1 00:11:09.018 00:11:09.018 ' 00:11:09.018 06:50:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:09.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.018 --rc genhtml_branch_coverage=1 00:11:09.018 --rc genhtml_function_coverage=1 00:11:09.018 --rc genhtml_legend=1 00:11:09.018 --rc geninfo_all_blocks=1 00:11:09.018 --rc geninfo_unexecuted_blocks=1 00:11:09.018 00:11:09.018 ' 00:11:09.018 06:50:13 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.018 06:50:13 -- nvmf/common.sh@7 -- # uname -s 00:11:09.018 06:50:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.018 06:50:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.018 06:50:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.018 06:50:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.018 06:50:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.018 06:50:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.018 06:50:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.018 06:50:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.018 06:50:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.018 06:50:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.018 06:50:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:09.018 
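Every nvme connect in this run reuses the host identity minted here: nvme gen-hostnqn emits a UUID-based NQN, and the same UUID doubles as the host ID on the next line of the trace. A minimal sketch, assuming common.sh derives the ID by stripping the NQN prefix (the values are the ones logged):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # 657f0c9c-3891-4064-9841-3d87a573b6e7 (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")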
06:50:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:09.018 06:50:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.018 06:50:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.018 06:50:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.018 06:50:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.018 06:50:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.018 06:50:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.018 06:50:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.018 06:50:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.018 06:50:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.019 06:50:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.019 06:50:13 -- paths/export.sh@5 -- # export PATH 00:11:09.019 06:50:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.019 06:50:13 -- nvmf/common.sh@46 -- # : 0 00:11:09.019 06:50:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:09.019 06:50:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:09.019 06:50:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:09.019 06:50:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.019 06:50:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.019 06:50:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
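The nvmf_veth_init run below first deletes any leftover interfaces (the Cannot find device and Cannot open network namespace complaints are expected noise on a clean host), then rebuilds the test network: the target lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1, and both sides hang off the nvmf_br bridge. Condensed to one veth pair (commands as logged below; the nvmf_tgt_if2/nvmf_tgt_br2 pair is wired up the same way, and each interface is also brought up with ip link set ... up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT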
00:11:09.019 06:50:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:09.019 06:50:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:09.019 06:50:13 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.019 06:50:13 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.019 06:50:13 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.019 06:50:13 -- target/fio.sh@16 -- # nvmftestinit 00:11:09.019 06:50:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:09.019 06:50:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.019 06:50:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:09.019 06:50:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:09.019 06:50:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:09.019 06:50:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.019 06:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.019 06:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.019 06:50:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:09.019 06:50:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:09.019 06:50:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:09.019 06:50:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:09.019 06:50:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:09.019 06:50:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:09.019 06:50:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.019 06:50:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.019 06:50:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.019 06:50:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:09.019 06:50:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.019 06:50:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.019 06:50:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.019 06:50:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.019 06:50:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.019 06:50:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.019 06:50:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.019 06:50:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.019 06:50:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:09.019 06:50:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:09.019 Cannot find device "nvmf_tgt_br" 00:11:09.019 06:50:13 -- nvmf/common.sh@154 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.019 Cannot find device "nvmf_tgt_br2" 00:11:09.019 06:50:13 -- nvmf/common.sh@155 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:09.019 06:50:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:09.019 Cannot find device "nvmf_tgt_br" 00:11:09.019 06:50:13 -- nvmf/common.sh@157 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:09.019 Cannot find device "nvmf_tgt_br2" 00:11:09.019 06:50:13 -- nvmf/common.sh@158 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:09.019 06:50:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:09.019 06:50:13 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.019 06:50:13 -- nvmf/common.sh@161 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.019 06:50:13 -- nvmf/common.sh@162 -- # true 00:11:09.019 06:50:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.019 06:50:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.019 06:50:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.019 06:50:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.019 06:50:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.019 06:50:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.019 06:50:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.019 06:50:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.019 06:50:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.278 06:50:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:09.278 06:50:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:09.278 06:50:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:09.278 06:50:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:09.278 06:50:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.278 06:50:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.278 06:50:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.278 06:50:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:09.278 06:50:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:09.278 06:50:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.278 06:50:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.278 06:50:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.278 06:50:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.278 06:50:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.278 06:50:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:09.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:11:09.278 00:11:09.278 --- 10.0.0.2 ping statistics --- 00:11:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.278 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:09.278 06:50:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:09.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:09.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:09.278 00:11:09.278 --- 10.0.0.3 ping statistics --- 00:11:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.278 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:09.278 06:50:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:09.278 00:11:09.278 --- 10.0.0.1 ping statistics --- 00:11:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.278 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:09.278 06:50:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.278 06:50:13 -- nvmf/common.sh@421 -- # return 0 00:11:09.278 06:50:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:09.278 06:50:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.278 06:50:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:09.278 06:50:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:09.278 06:50:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.278 06:50:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:09.278 06:50:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:09.278 06:50:13 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:09.278 06:50:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:09.278 06:50:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.278 06:50:13 -- common/autotest_common.sh@10 -- # set +x 00:11:09.278 06:50:13 -- nvmf/common.sh@469 -- # nvmfpid=75354 00:11:09.278 06:50:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.278 06:50:13 -- nvmf/common.sh@470 -- # waitforlisten 75354 00:11:09.278 06:50:13 -- common/autotest_common.sh@829 -- # '[' -z 75354 ']' 00:11:09.278 06:50:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.278 06:50:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.278 06:50:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.278 06:50:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.278 06:50:13 -- common/autotest_common.sh@10 -- # set +x 00:11:09.278 [2024-12-13 06:50:13.711510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:09.278 [2024-12-13 06:50:13.711590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.537 [2024-12-13 06:50:13.843815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.537 [2024-12-13 06:50:13.875956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.537 [2024-12-13 06:50:13.876110] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.537 [2024-12-13 06:50:13.876122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:09.537 [2024-12-13 06:50:13.876130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.537 [2024-12-13 06:50:13.876189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.537 [2024-12-13 06:50:13.876345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.537 [2024-12-13 06:50:13.877075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.537 [2024-12-13 06:50:13.877139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.473 06:50:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.473 06:50:14 -- common/autotest_common.sh@862 -- # return 0 00:11:10.473 06:50:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:10.473 06:50:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.473 06:50:14 -- common/autotest_common.sh@10 -- # set +x 00:11:10.473 06:50:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.473 06:50:14 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:10.473 [2024-12-13 06:50:14.922118] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.473 06:50:14 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.045 06:50:15 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:11.045 06:50:15 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.045 06:50:15 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:11.045 06:50:15 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.304 06:50:15 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:11.304 06:50:15 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.564 06:50:16 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:11.564 06:50:16 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:12.132 06:50:16 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.132 06:50:16 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:12.132 06:50:16 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.390 06:50:16 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:12.390 06:50:16 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.649 06:50:17 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:12.649 06:50:17 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:12.952 06:50:17 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.211 06:50:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:13.211 06:50:17 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.470 06:50:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:13.470 06:50:17 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.728 06:50:18 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.728 [2024-12-13 06:50:18.242088] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.987 06:50:18 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:13.987 06:50:18 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:14.246 06:50:18 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.505 06:50:18 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:14.505 06:50:18 -- common/autotest_common.sh@1187 -- # local i=0 00:11:14.505 06:50:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.505 06:50:18 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:14.505 06:50:18 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:14.505 06:50:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:16.413 06:50:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:16.413 06:50:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:16.413 06:50:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.413 06:50:20 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:11:16.413 06:50:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.413 06:50:20 -- common/autotest_common.sh@1197 -- # return 0 00:11:16.413 06:50:20 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:16.413 [global] 00:11:16.413 thread=1 00:11:16.413 invalidate=1 00:11:16.413 rw=write 00:11:16.413 time_based=1 00:11:16.413 runtime=1 00:11:16.413 ioengine=libaio 00:11:16.413 direct=1 00:11:16.413 bs=4096 00:11:16.413 iodepth=1 00:11:16.413 norandommap=0 00:11:16.413 numjobs=1 00:11:16.413 00:11:16.413 verify_dump=1 00:11:16.413 verify_backlog=512 00:11:16.413 verify_state_save=0 00:11:16.413 do_verify=1 00:11:16.413 verify=crc32c-intel 00:11:16.413 [job0] 00:11:16.413 filename=/dev/nvme0n1 00:11:16.413 [job1] 00:11:16.413 filename=/dev/nvme0n2 00:11:16.413 [job2] 00:11:16.413 filename=/dev/nvme0n3 00:11:16.413 [job3] 00:11:16.413 filename=/dev/nvme0n4 00:11:16.672 Could not set queue depth (nvme0n1) 00:11:16.672 Could not set queue depth (nvme0n2) 00:11:16.672 Could not set queue depth (nvme0n3) 00:11:16.672 Could not set queue depth (nvme0n4) 00:11:16.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.672 fio-3.35 00:11:16.672 Starting 4 threads 00:11:18.047 00:11:18.047 job0: (groupid=0, jobs=1): err= 0: pid=75533: Fri Dec 13 06:50:22 2024 00:11:18.047 read: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1001msec) 
00:11:18.047 slat (nsec): min=11419, max=43265, avg=14654.50, stdev=3812.22 00:11:18.047 clat (usec): min=122, max=281, avg=165.45, stdev=17.34 00:11:18.047 lat (usec): min=135, max=294, avg=180.10, stdev=17.97 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:11:18.047 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:11:18.047 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:11:18.047 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 249], 99.95th=[ 249], 00:11:18.047 | 99.99th=[ 281] 00:11:18.047 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.047 slat (usec): min=14, max=117, avg=22.07, stdev= 6.44 00:11:18.047 clat (usec): min=88, max=2631, avg=128.19, stdev=50.14 00:11:18.047 lat (usec): min=107, max=2650, avg=150.26, stdev=50.95 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 115], 00:11:18.047 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:11:18.047 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 155], 00:11:18.047 | 99.00th=[ 180], 99.50th=[ 200], 99.90th=[ 367], 99.95th=[ 660], 00:11:18.047 | 99.99th=[ 2638] 00:11:18.047 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.047 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.047 lat (usec) : 100=0.48%, 250=99.33%, 500=0.13%, 750=0.03% 00:11:18.047 lat (msec) : 4=0.02% 00:11:18.047 cpu : usr=2.40%, sys=8.70%, ctx=6006, majf=0, minf=9 00:11:18.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 issued rwts: total=2933,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.047 job1: (groupid=0, jobs=1): err= 0: pid=75534: Fri Dec 13 06:50:22 2024 00:11:18.047 read: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec) 00:11:18.047 slat (nsec): min=11549, max=62925, avg=17017.33, stdev=5771.51 00:11:18.047 clat (usec): min=142, max=720, avg=282.50, stdev=79.86 00:11:18.047 lat (usec): min=164, max=748, avg=299.52, stdev=81.57 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 180], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 237], 00:11:18.047 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:11:18.047 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 429], 95.00th=[ 498], 00:11:18.047 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 717], 00:11:18.047 | 99.99th=[ 717] 00:11:18.047 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:18.047 slat (nsec): min=14747, max=60559, avg=23392.83, stdev=5384.06 00:11:18.047 clat (usec): min=90, max=3324, avg=181.45, stdev=80.11 00:11:18.047 lat (usec): min=109, max=3367, avg=204.85, stdev=80.60 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 103], 5.00th=[ 113], 10.00th=[ 123], 20.00th=[ 135], 00:11:18.047 | 30.00th=[ 157], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 196], 00:11:18.047 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:11:18.047 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 285], 00:11:18.047 | 99.99th=[ 3326] 00:11:18.047 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 
00:11:18.047 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:18.047 lat (usec) : 100=0.30%, 250=68.65%, 500=28.67%, 750=2.35% 00:11:18.047 lat (msec) : 4=0.03% 00:11:18.047 cpu : usr=2.20%, sys=6.20%, ctx=3960, majf=0, minf=9 00:11:18.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 issued rwts: total=1911,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.047 job2: (groupid=0, jobs=1): err= 0: pid=75535: Fri Dec 13 06:50:22 2024 00:11:18.047 read: IOPS=1657, BW=6629KiB/s (6788kB/s)(6636KiB/1001msec) 00:11:18.047 slat (nsec): min=14864, max=58595, avg=18897.86, stdev=5022.80 00:11:18.047 clat (usec): min=168, max=544, avg=285.00, stdev=61.38 00:11:18.047 lat (usec): min=185, max=566, avg=303.90, stdev=63.92 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:11:18.047 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:11:18.047 | 70.00th=[ 285], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 420], 00:11:18.047 | 99.00th=[ 482], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 545], 00:11:18.047 | 99.99th=[ 545] 00:11:18.047 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:18.047 slat (usec): min=18, max=135, avg=30.87, stdev=10.13 00:11:18.047 clat (usec): min=100, max=2009, avg=207.28, stdev=72.57 00:11:18.047 lat (usec): min=124, max=2031, avg=238.15, stdev=76.73 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 111], 5.00th=[ 121], 10.00th=[ 128], 20.00th=[ 149], 00:11:18.047 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 212], 00:11:18.047 | 70.00th=[ 223], 80.00th=[ 243], 90.00th=[ 310], 95.00th=[ 330], 00:11:18.047 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 400], 99.95th=[ 408], 00:11:18.047 | 99.99th=[ 2008] 00:11:18.047 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:18.047 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:18.047 lat (usec) : 250=59.89%, 500=39.82%, 750=0.27% 00:11:18.047 lat (msec) : 4=0.03% 00:11:18.047 cpu : usr=2.20%, sys=7.10%, ctx=3707, majf=0, minf=7 00:11:18.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.047 issued rwts: total=1659,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.047 job3: (groupid=0, jobs=1): err= 0: pid=75536: Fri Dec 13 06:50:22 2024 00:11:18.047 read: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1001msec) 00:11:18.047 slat (nsec): min=12435, max=47058, avg=15758.80, stdev=3780.54 00:11:18.047 clat (usec): min=132, max=337, avg=176.76, stdev=22.18 00:11:18.047 lat (usec): min=145, max=351, avg=192.52, stdev=22.34 00:11:18.047 clat percentiles (usec): 00:11:18.047 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:18.047 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:18.047 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 217], 00:11:18.048 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 330], 00:11:18.048 | 
99.99th=[ 338] 00:11:18.048 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.048 slat (usec): min=14, max=119, avg=23.37, stdev= 6.47 00:11:18.048 clat (usec): min=97, max=462, avg=132.76, stdev=18.28 00:11:18.048 lat (usec): min=116, max=483, avg=156.13, stdev=19.27 00:11:18.048 clat percentiles (usec): 00:11:18.048 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:11:18.048 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:11:18.048 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:11:18.048 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 227], 99.95th=[ 245], 00:11:18.048 | 99.99th=[ 465] 00:11:18.048 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.048 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.048 lat (usec) : 100=0.05%, 250=99.21%, 500=0.73% 00:11:18.048 cpu : usr=2.20%, sys=9.00%, ctx=5725, majf=0, minf=11 00:11:18.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.048 issued rwts: total=2650,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.048 00:11:18.048 Run status group 0 (all jobs): 00:11:18.048 READ: bw=35.7MiB/s (37.5MB/s), 6629KiB/s-11.4MiB/s (6788kB/s-12.0MB/s), io=35.8MiB (37.5MB), run=1001-1001msec 00:11:18.048 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:18.048 00:11:18.048 Disk stats (read/write): 00:11:18.048 nvme0n1: ios=2610/2590, merge=0/0, ticks=459/361, in_queue=820, util=87.26% 00:11:18.048 nvme0n2: ios=1577/1968, merge=0/0, ticks=458/365, in_queue=823, util=88.41% 00:11:18.048 nvme0n3: ios=1536/1583, merge=0/0, ticks=449/356, in_queue=805, util=89.21% 00:11:18.048 nvme0n4: ios=2361/2560, merge=0/0, ticks=420/363, in_queue=783, util=89.77% 00:11:18.048 06:50:22 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.048 [global] 00:11:18.048 thread=1 00:11:18.048 invalidate=1 00:11:18.048 rw=randwrite 00:11:18.048 time_based=1 00:11:18.048 runtime=1 00:11:18.048 ioengine=libaio 00:11:18.048 direct=1 00:11:18.048 bs=4096 00:11:18.048 iodepth=1 00:11:18.048 norandommap=0 00:11:18.048 numjobs=1 00:11:18.048 00:11:18.048 verify_dump=1 00:11:18.048 verify_backlog=512 00:11:18.048 verify_state_save=0 00:11:18.048 do_verify=1 00:11:18.048 verify=crc32c-intel 00:11:18.048 [job0] 00:11:18.048 filename=/dev/nvme0n1 00:11:18.048 [job1] 00:11:18.048 filename=/dev/nvme0n2 00:11:18.048 [job2] 00:11:18.048 filename=/dev/nvme0n3 00:11:18.048 [job3] 00:11:18.048 filename=/dev/nvme0n4 00:11:18.048 Could not set queue depth (nvme0n1) 00:11:18.048 Could not set queue depth (nvme0n2) 00:11:18.048 Could not set queue depth (nvme0n3) 00:11:18.048 Could not set queue depth (nvme0n4) 00:11:18.048 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.048 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.048 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.048 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.048 fio-3.35 00:11:18.048 Starting 4 threads 00:11:19.425 00:11:19.425 job0: (groupid=0, jobs=1): err= 0: pid=75594: Fri Dec 13 06:50:23 2024 00:11:19.425 read: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:11:19.425 slat (nsec): min=9839, max=70306, avg=15375.16, stdev=5431.02 00:11:19.425 clat (usec): min=121, max=531, avg=168.81, stdev=30.70 00:11:19.425 lat (usec): min=134, max=544, avg=184.18, stdev=31.10 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:11:19.425 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:11:19.425 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 200], 95.00th=[ 223], 00:11:19.425 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 465], 99.95th=[ 506], 00:11:19.425 | 99.99th=[ 529] 00:11:19.425 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:19.425 slat (usec): min=15, max=692, avg=22.62, stdev=18.93 00:11:19.425 clat (usec): min=3, max=2532, avg=141.24, stdev=74.91 00:11:19.425 lat (usec): min=107, max=2552, avg=163.86, stdev=77.12 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 95], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 111], 00:11:19.425 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 124], 60.00th=[ 131], 00:11:19.425 | 70.00th=[ 141], 80.00th=[ 159], 90.00th=[ 204], 95.00th=[ 227], 00:11:19.425 | 99.00th=[ 289], 99.50th=[ 383], 99.90th=[ 1045], 99.95th=[ 1549], 00:11:19.425 | 99.99th=[ 2540] 00:11:19.425 bw ( KiB/s): min=12288, max=12288, per=30.97%, avg=12288.00, stdev= 0.00, samples=1 00:11:19.425 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:19.425 lat (usec) : 4=0.04%, 20=0.02%, 100=2.26%, 250=95.27%, 500=2.19% 00:11:19.425 lat (usec) : 750=0.14%, 1000=0.02% 00:11:19.425 lat (msec) : 2=0.05%, 4=0.02% 00:11:19.425 cpu : usr=2.00%, sys=9.10%, ctx=5726, majf=0, minf=3 00:11:19.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 issued rwts: total=2636,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.425 job1: (groupid=0, jobs=1): err= 0: pid=75595: Fri Dec 13 06:50:23 2024 00:11:19.425 read: IOPS=2004, BW=8020KiB/s (8212kB/s)(8028KiB/1001msec) 00:11:19.425 slat (nsec): min=7500, max=67484, avg=14204.37, stdev=5247.28 00:11:19.425 clat (usec): min=194, max=398, avg=250.23, stdev=25.48 00:11:19.425 lat (usec): min=204, max=414, avg=264.44, stdev=25.82 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:19.425 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:11:19.425 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:11:19.425 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 343], 99.95th=[ 343], 00:11:19.425 | 99.99th=[ 400] 00:11:19.425 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:19.425 slat (usec): min=9, max=149, avg=20.74, stdev= 7.90 00:11:19.425 clat (usec): min=127, max=505, avg=204.99, stdev=23.56 00:11:19.425 lat (usec): min=166, max=521, avg=225.73, stdev=25.05 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:11:19.425 | 30.00th=[ 192], 40.00th=[ 198], 
50.00th=[ 202], 60.00th=[ 208], 00:11:19.425 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:11:19.425 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 330], 00:11:19.425 | 99.99th=[ 506] 00:11:19.425 bw ( KiB/s): min= 8192, max= 8192, per=20.64%, avg=8192.00, stdev= 0.00, samples=1 00:11:19.425 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:19.425 lat (usec) : 250=75.17%, 500=24.81%, 750=0.02% 00:11:19.425 cpu : usr=2.10%, sys=5.70%, ctx=4055, majf=0, minf=11 00:11:19.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 issued rwts: total=2007,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.425 job2: (groupid=0, jobs=1): err= 0: pid=75596: Fri Dec 13 06:50:23 2024 00:11:19.425 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:19.425 slat (usec): min=11, max=148, avg=15.87, stdev= 6.08 00:11:19.425 clat (usec): min=43, max=1451, avg=188.17, stdev=39.07 00:11:19.425 lat (usec): min=149, max=1466, avg=204.03, stdev=39.43 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:11:19.425 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:11:19.425 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 227], 95.00th=[ 247], 00:11:19.425 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 408], 99.95th=[ 429], 00:11:19.425 | 99.99th=[ 1450] 00:11:19.425 write: IOPS=2759, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:11:19.425 slat (nsec): min=13044, max=78164, avg=22957.57, stdev=6717.72 00:11:19.425 clat (usec): min=96, max=7629, avg=146.34, stdev=167.25 00:11:19.425 lat (usec): min=116, max=7648, avg=169.30, stdev=167.20 00:11:19.425 clat percentiles (usec): 00:11:19.425 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:11:19.425 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 139], 00:11:19.425 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 198], 00:11:19.425 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 2769], 99.95th=[ 3392], 00:11:19.425 | 99.99th=[ 7635] 00:11:19.425 bw ( KiB/s): min=12288, max=12288, per=30.97%, avg=12288.00, stdev= 0.00, samples=1 00:11:19.425 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:19.425 lat (usec) : 50=0.02%, 100=0.04%, 250=97.50%, 500=2.31%, 750=0.02% 00:11:19.425 lat (usec) : 1000=0.02% 00:11:19.425 lat (msec) : 2=0.04%, 4=0.04%, 10=0.02% 00:11:19.425 cpu : usr=1.70%, sys=8.70%, ctx=5329, majf=0, minf=22 00:11:19.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.425 issued rwts: total=2560,2762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.425 job3: (groupid=0, jobs=1): err= 0: pid=75597: Fri Dec 13 06:50:23 2024 00:11:19.425 read: IOPS=2005, BW=8024KiB/s (8217kB/s)(8032KiB/1001msec) 00:11:19.425 slat (nsec): min=7743, max=77696, avg=15168.65, stdev=7641.38 00:11:19.425 clat (usec): min=156, max=346, avg=249.04, stdev=23.96 00:11:19.425 lat (usec): min=169, max=355, avg=264.21, stdev=25.25 00:11:19.425 
clat percentiles (usec): 00:11:19.425 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:19.425 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:11:19.425 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:11:19.425 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 334], 99.95th=[ 338], 00:11:19.425 | 99.99th=[ 347] 00:11:19.425 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:19.426 slat (nsec): min=9966, max=89834, avg=18942.02, stdev=8101.68 00:11:19.426 clat (usec): min=140, max=586, avg=207.02, stdev=24.57 00:11:19.426 lat (usec): min=165, max=609, avg=225.97, stdev=25.30 00:11:19.426 clat percentiles (usec): 00:11:19.426 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:11:19.426 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:11:19.426 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 247], 00:11:19.426 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 281], 99.95th=[ 289], 00:11:19.426 | 99.99th=[ 586] 00:11:19.426 bw ( KiB/s): min= 8208, max= 8208, per=20.69%, avg=8208.00, stdev= 0.00, samples=1 00:11:19.426 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:11:19.426 lat (usec) : 250=75.67%, 500=24.31%, 750=0.02% 00:11:19.426 cpu : usr=1.90%, sys=5.70%, ctx=4056, majf=0, minf=11 00:11:19.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.426 issued rwts: total=2008,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.426 00:11:19.426 Run status group 0 (all jobs): 00:11:19.426 READ: bw=35.9MiB/s (37.7MB/s), 8020KiB/s-10.3MiB/s (8212kB/s-10.8MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:11:19.426 WRITE: bw=38.8MiB/s (40.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.8MiB (40.7MB), run=1001-1001msec 00:11:19.426 00:11:19.426 Disk stats (read/write): 00:11:19.426 nvme0n1: ios=2610/2601, merge=0/0, ticks=469/362, in_queue=831, util=88.98% 00:11:19.426 nvme0n2: ios=1586/2048, merge=0/0, ticks=408/409, in_queue=817, util=89.71% 00:11:19.426 nvme0n3: ios=2235/2560, merge=0/0, ticks=453/365, in_queue=818, util=89.05% 00:11:19.426 nvme0n4: ios=1537/2048, merge=0/0, ticks=369/392, in_queue=761, util=89.80% 00:11:19.426 06:50:23 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:19.426 [global] 00:11:19.426 thread=1 00:11:19.426 invalidate=1 00:11:19.426 rw=write 00:11:19.426 time_based=1 00:11:19.426 runtime=1 00:11:19.426 ioengine=libaio 00:11:19.426 direct=1 00:11:19.426 bs=4096 00:11:19.426 iodepth=128 00:11:19.426 norandommap=0 00:11:19.426 numjobs=1 00:11:19.426 00:11:19.426 verify_dump=1 00:11:19.426 verify_backlog=512 00:11:19.426 verify_state_save=0 00:11:19.426 do_verify=1 00:11:19.426 verify=crc32c-intel 00:11:19.426 [job0] 00:11:19.426 filename=/dev/nvme0n1 00:11:19.426 [job1] 00:11:19.426 filename=/dev/nvme0n2 00:11:19.426 [job2] 00:11:19.426 filename=/dev/nvme0n3 00:11:19.426 [job3] 00:11:19.426 filename=/dev/nvme0n4 00:11:19.426 Could not set queue depth (nvme0n1) 00:11:19.426 Could not set queue depth (nvme0n2) 00:11:19.426 Could not set queue depth (nvme0n3) 00:11:19.426 Could not set queue depth (nvme0n4) 00:11:19.426 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.426 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.426 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.426 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.426 fio-3.35 00:11:19.426 Starting 4 threads 00:11:20.808 00:11:20.808 job0: (groupid=0, jobs=1): err= 0: pid=75657: Fri Dec 13 06:50:24 2024 00:11:20.808 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:11:20.808 slat (usec): min=8, max=2696, avg=81.01, stdev=379.04 00:11:20.808 clat (usec): min=7931, max=12089, avg=10928.96, stdev=531.36 00:11:20.808 lat (usec): min=9848, max=12102, avg=11009.97, stdev=377.78 00:11:20.808 clat percentiles (usec): 00:11:20.808 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:11:20.808 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:11:20.808 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:11:20.808 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:11:20.808 | 99.99th=[12125] 00:11:20.808 write: IOPS=5941, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1002msec); 0 zone resets 00:11:20.808 slat (usec): min=10, max=2584, avg=84.14, stdev=354.14 00:11:20.808 clat (usec): min=172, max=12326, avg=10940.27, stdev=986.92 00:11:20.808 lat (usec): min=2136, max=12344, avg=11024.41, stdev=922.02 00:11:20.808 clat percentiles (usec): 00:11:20.808 | 1.00th=[ 5800], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10683], 00:11:20.808 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:20.808 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:11:20.808 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12256], 99.95th=[12387], 00:11:20.808 | 99.99th=[12387] 00:11:20.808 bw ( KiB/s): min=24526, max=24526, per=36.42%, avg=24526.00, stdev= 0.00, samples=1 00:11:20.808 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 00:11:20.808 lat (usec) : 250=0.01% 00:11:20.808 lat (msec) : 4=0.28%, 10=6.05%, 20=93.66% 00:11:20.808 cpu : usr=4.90%, sys=14.89%, ctx=368, majf=0, minf=1 00:11:20.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:20.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.808 issued rwts: total=5632,5953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.808 job1: (groupid=0, jobs=1): err= 0: pid=75658: Fri Dec 13 06:50:24 2024 00:11:20.808 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:11:20.808 slat (usec): min=4, max=9401, avg=188.12, stdev=794.89 00:11:20.808 clat (usec): min=15857, max=34946, avg=24171.64, stdev=3391.63 00:11:20.809 lat (usec): min=15885, max=35305, avg=24359.76, stdev=3424.00 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[17433], 5.00th=[18482], 10.00th=[19792], 20.00th=[21365], 00:11:20.809 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24511], 00:11:20.809 | 70.00th=[26084], 80.00th=[27395], 90.00th=[28967], 95.00th=[30278], 00:11:20.809 | 99.00th=[31589], 99.50th=[32375], 99.90th=[33817], 99.95th=[34341], 00:11:20.809 | 99.99th=[34866] 00:11:20.809 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:20.809 slat 
(usec): min=10, max=7150, avg=161.58, stdev=735.69 00:11:20.809 clat (usec): min=2659, max=33090, avg=21061.12, stdev=4588.56 00:11:20.809 lat (usec): min=2689, max=33110, avg=21222.70, stdev=4610.91 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[ 8717], 5.00th=[14091], 10.00th=[16319], 20.00th=[17433], 00:11:20.809 | 30.00th=[18482], 40.00th=[19792], 50.00th=[21103], 60.00th=[22938], 00:11:20.809 | 70.00th=[23725], 80.00th=[25035], 90.00th=[26346], 95.00th=[28443], 00:11:20.809 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31327], 99.95th=[32375], 00:11:20.809 | 99.99th=[33162] 00:11:20.809 bw ( KiB/s): min=11176, max=12288, per=17.42%, avg=11732.00, stdev=786.30, samples=2 00:11:20.809 iops : min= 2794, max= 3072, avg=2933.00, stdev=196.58, samples=2 00:11:20.809 lat (msec) : 4=0.53%, 10=0.75%, 20=27.49%, 50=71.23% 00:11:20.809 cpu : usr=3.19%, sys=7.39%, ctx=666, majf=0, minf=1 00:11:20.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:20.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.809 issued rwts: total=2560,3060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.809 job2: (groupid=0, jobs=1): err= 0: pid=75659: Fri Dec 13 06:50:24 2024 00:11:20.809 read: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:11:20.809 slat (usec): min=8, max=3920, avg=96.73, stdev=456.95 00:11:20.809 clat (usec): min=232, max=16875, avg=12749.69, stdev=1544.94 00:11:20.809 lat (usec): min=2660, max=16890, avg=12846.42, stdev=1483.33 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[ 5932], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:11:20.809 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:11:20.809 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[15664], 00:11:20.809 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:11:20.809 | 99.99th=[16909] 00:11:20.809 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:20.809 slat (usec): min=10, max=3658, avg=98.60, stdev=425.08 00:11:20.809 clat (usec): min=9469, max=16777, avg=12984.03, stdev=1080.92 00:11:20.809 lat (usec): min=10445, max=16813, avg=13082.63, stdev=999.17 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[10290], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:11:20.809 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:11:20.809 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[15926], 00:11:20.809 | 99.00th=[16712], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712], 00:11:20.809 | 99.99th=[16909] 00:11:20.809 bw ( KiB/s): min=20480, max=20480, per=30.41%, avg=20480.00, stdev= 0.00, samples=1 00:11:20.809 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:20.809 lat (usec) : 250=0.01% 00:11:20.809 lat (msec) : 4=0.33%, 10=1.38%, 20=98.28% 00:11:20.809 cpu : usr=4.50%, sys=14.09%, ctx=309, majf=0, minf=1 00:11:20.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:20.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.809 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.809 job3: 
(groupid=0, jobs=1): err= 0: pid=75660: Fri Dec 13 06:50:24 2024 00:11:20.809 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:20.809 slat (usec): min=3, max=7507, avg=190.22, stdev=794.21 00:11:20.809 clat (usec): min=15447, max=34896, avg=24132.25, stdev=2864.40 00:11:20.809 lat (usec): min=15470, max=34923, avg=24322.47, stdev=2887.64 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[18482], 5.00th=[20055], 10.00th=[20841], 20.00th=[21627], 00:11:20.809 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24511], 00:11:20.809 | 70.00th=[25297], 80.00th=[26608], 90.00th=[27919], 95.00th=[29230], 00:11:20.809 | 99.00th=[32375], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:11:20.809 | 99.99th=[34866] 00:11:20.809 write: IOPS=2750, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:11:20.809 slat (usec): min=11, max=7013, avg=178.61, stdev=764.04 00:11:20.809 clat (usec): min=218, max=36468, avg=23221.40, stdev=4616.41 00:11:20.809 lat (usec): min=3554, max=36495, avg=23400.00, stdev=4614.91 00:11:20.809 clat percentiles (usec): 00:11:20.809 | 1.00th=[ 8455], 5.00th=[16581], 10.00th=[18744], 20.00th=[20317], 00:11:20.809 | 30.00th=[21103], 40.00th=[22676], 50.00th=[23200], 60.00th=[23987], 00:11:20.809 | 70.00th=[24773], 80.00th=[25822], 90.00th=[28705], 95.00th=[31327], 00:11:20.809 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:11:20.809 | 99.99th=[36439] 00:11:20.809 bw ( KiB/s): min=12088, max=12088, per=17.95%, avg=12088.00, stdev= 0.00, samples=1 00:11:20.809 iops : min= 3022, max= 3022, avg=3022.00, stdev= 0.00, samples=1 00:11:20.809 lat (usec) : 250=0.02% 00:11:20.809 lat (msec) : 4=0.36%, 10=0.47%, 20=10.82%, 50=88.33% 00:11:20.809 cpu : usr=2.50%, sys=7.40%, ctx=626, majf=0, minf=6 00:11:20.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:20.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.809 issued rwts: total=2560,2753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.809 00:11:20.809 Run status group 0 (all jobs): 00:11:20.809 READ: bw=60.2MiB/s (63.1MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=60.4MiB (63.3MB), run=1001-1003msec 00:11:20.809 WRITE: bw=65.8MiB/s (69.0MB/s), 10.7MiB/s-23.2MiB/s (11.3MB/s-24.3MB/s), io=66.0MiB (69.2MB), run=1001-1003msec 00:11:20.809 00:11:20.809 Disk stats (read/write): 00:11:20.809 nvme0n1: ios=4946/5120, merge=0/0, ticks=11862/11955, in_queue=23817, util=89.08% 00:11:20.809 nvme0n2: ios=2304/2560, merge=0/0, ticks=17359/16334, in_queue=33693, util=89.51% 00:11:20.809 nvme0n3: ios=4123/4544, merge=0/0, ticks=11151/12637, in_queue=23788, util=89.97% 00:11:20.809 nvme0n4: ios=2048/2539, merge=0/0, ticks=15606/18010, in_queue=33616, util=89.41% 00:11:20.809 06:50:24 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:20.809 [global] 00:11:20.809 thread=1 00:11:20.809 invalidate=1 00:11:20.809 rw=randwrite 00:11:20.809 time_based=1 00:11:20.809 runtime=1 00:11:20.809 ioengine=libaio 00:11:20.809 direct=1 00:11:20.809 bs=4096 00:11:20.809 iodepth=128 00:11:20.809 norandommap=0 00:11:20.809 numjobs=1 00:11:20.809 00:11:20.809 verify_dump=1 00:11:20.809 verify_backlog=512 00:11:20.809 verify_state_save=0 00:11:20.809 do_verify=1 00:11:20.809 verify=crc32c-intel 00:11:20.809 
[job0] 00:11:20.809 filename=/dev/nvme0n1 00:11:20.809 [job1] 00:11:20.809 filename=/dev/nvme0n2 00:11:20.809 [job2] 00:11:20.809 filename=/dev/nvme0n3 00:11:20.809 [job3] 00:11:20.809 filename=/dev/nvme0n4 00:11:20.809 Could not set queue depth (nvme0n1) 00:11:20.809 Could not set queue depth (nvme0n2) 00:11:20.809 Could not set queue depth (nvme0n3) 00:11:20.809 Could not set queue depth (nvme0n4) 00:11:20.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.809 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.809 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.809 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.809 fio-3.35 00:11:20.809 Starting 4 threads 00:11:22.189 00:11:22.189 job0: (groupid=0, jobs=1): err= 0: pid=75714: Fri Dec 13 06:50:26 2024 00:11:22.189 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:22.189 slat (usec): min=7, max=2959, avg=82.24, stdev=344.30 00:11:22.189 clat (usec): min=7952, max=14233, avg=10926.75, stdev=980.39 00:11:22.189 lat (usec): min=8051, max=15140, avg=11008.98, stdev=1000.41 00:11:22.189 clat percentiles (usec): 00:11:22.189 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10028], 00:11:22.189 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:11:22.189 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:11:22.189 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13566], 99.95th=[13698], 00:11:22.189 | 99.99th=[14222] 00:11:22.189 write: IOPS=6050, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1003msec); 0 zone resets 00:11:22.189 slat (usec): min=10, max=3224, avg=81.53, stdev=385.55 00:11:22.189 clat (usec): min=143, max=14434, avg=10754.18, stdev=1013.67 00:11:22.189 lat (usec): min=2751, max=14480, avg=10835.71, stdev=1074.32 00:11:22.189 clat percentiles (usec): 00:11:22.189 | 1.00th=[ 6652], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:11:22.189 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:11:22.189 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[12125], 00:11:22.189 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14222], 99.95th=[14222], 00:11:22.189 | 99.99th=[14484] 00:11:22.189 bw ( KiB/s): min=22906, max=24576, per=35.30%, avg=23741.00, stdev=1180.87, samples=2 00:11:22.189 iops : min= 5726, max= 6144, avg=5935.00, stdev=295.57, samples=2 00:11:22.189 lat (usec) : 250=0.01% 00:11:22.189 lat (msec) : 4=0.36%, 10=13.26%, 20=86.37% 00:11:22.189 cpu : usr=5.19%, sys=14.97%, ctx=424, majf=0, minf=5 00:11:22.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:22.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.189 issued rwts: total=5632,6069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.189 job1: (groupid=0, jobs=1): err= 0: pid=75715: Fri Dec 13 06:50:26 2024 00:11:22.189 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:22.189 slat (usec): min=3, max=7593, avg=163.61, stdev=651.06 00:11:22.189 clat (usec): min=8309, max=36748, avg=20778.60, stdev=7484.77 00:11:22.189 lat (usec): min=10350, max=36782, avg=20942.21, stdev=7528.68 
00:11:22.189 clat percentiles (usec): 00:11:22.189 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[10945], 20.00th=[11207], 00:11:22.189 | 30.00th=[11469], 40.00th=[20055], 50.00th=[23725], 60.00th=[24773], 00:11:22.189 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[30540], 00:11:22.189 | 99.00th=[33162], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:11:22.189 | 99.99th=[36963] 00:11:22.189 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1006msec); 0 zone resets 00:11:22.189 slat (usec): min=4, max=5380, avg=132.63, stdev=515.01 00:11:22.189 clat (usec): min=4967, max=34445, avg=17765.40, stdev=5799.92 00:11:22.189 lat (usec): min=5594, max=34464, avg=17898.03, stdev=5821.62 00:11:22.189 clat percentiles (usec): 00:11:22.189 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[10814], 20.00th=[11076], 00:11:22.189 | 30.00th=[11600], 40.00th=[16909], 50.00th=[19006], 60.00th=[20055], 00:11:22.189 | 70.00th=[21103], 80.00th=[22152], 90.00th=[25822], 95.00th=[27395], 00:11:22.189 | 99.00th=[31065], 99.50th=[31589], 99.90th=[34341], 99.95th=[34341], 00:11:22.189 | 99.99th=[34341] 00:11:22.189 bw ( KiB/s): min=11057, max=16416, per=20.42%, avg=13736.50, stdev=3789.39, samples=2 00:11:22.189 iops : min= 2764, max= 4104, avg=3434.00, stdev=947.52, samples=2 00:11:22.189 lat (msec) : 10=1.40%, 20=48.43%, 50=50.17% 00:11:22.189 cpu : usr=3.28%, sys=8.76%, ctx=878, majf=0, minf=18 00:11:22.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:22.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.189 issued rwts: total=3072,3560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.189 job2: (groupid=0, jobs=1): err= 0: pid=75716: Fri Dec 13 06:50:26 2024 00:11:22.189 read: IOPS=4510, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1004msec) 00:11:22.189 slat (usec): min=4, max=9347, avg=107.33, stdev=503.56 00:11:22.189 clat (usec): min=356, max=35639, avg=13948.86, stdev=4686.90 00:11:22.189 lat (usec): min=3076, max=36632, avg=14056.19, stdev=4720.87 00:11:22.189 clat percentiles (usec): 00:11:22.190 | 1.00th=[ 6652], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:11:22.190 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:11:22.190 | 70.00th=[13566], 80.00th=[14353], 90.00th=[22414], 95.00th=[26346], 00:11:22.190 | 99.00th=[29754], 99.50th=[31327], 99.90th=[32375], 99.95th=[35390], 00:11:22.190 | 99.99th=[35390] 00:11:22.190 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:22.190 slat (usec): min=9, max=6982, avg=104.09, stdev=486.13 00:11:22.190 clat (usec): min=9115, max=27025, avg=13708.63, stdev=2465.44 00:11:22.190 lat (usec): min=9136, max=27143, avg=13812.72, stdev=2517.62 00:11:22.190 clat percentiles (usec): 00:11:22.190 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:11:22.190 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:11:22.190 | 70.00th=[13829], 80.00th=[14746], 90.00th=[18220], 95.00th=[19268], 00:11:22.190 | 99.00th=[21365], 99.50th=[21890], 99.90th=[24773], 99.95th=[25035], 00:11:22.190 | 99.99th=[27132] 00:11:22.190 bw ( KiB/s): min=16416, max=20439, per=27.40%, avg=18427.50, stdev=2844.69, samples=2 00:11:22.190 iops : min= 4104, max= 5109, avg=4606.50, stdev=710.64, samples=2 00:11:22.190 lat (usec) : 500=0.01% 00:11:22.190 lat (msec) : 4=0.38%, 10=1.26%, 
20=90.73%, 50=7.62% 00:11:22.190 cpu : usr=5.18%, sys=11.67%, ctx=499, majf=0, minf=7 00:11:22.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:22.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.190 issued rwts: total=4529,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.190 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.190 job3: (groupid=0, jobs=1): err= 0: pid=75717: Fri Dec 13 06:50:26 2024 00:11:22.190 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:22.190 slat (usec): min=3, max=7775, avg=197.92, stdev=782.67 00:11:22.190 clat (usec): min=17888, max=34400, avg=25338.83, stdev=2769.66 00:11:22.190 lat (usec): min=18819, max=34412, avg=25536.75, stdev=2742.27 00:11:22.190 clat percentiles (usec): 00:11:22.190 | 1.00th=[19530], 5.00th=[20841], 10.00th=[22152], 20.00th=[23200], 00:11:22.190 | 30.00th=[23725], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:11:22.190 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[30540], 00:11:22.190 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:11:22.190 | 99.99th=[34341] 00:11:22.190 write: IOPS=2668, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1004msec); 0 zone resets 00:11:22.190 slat (usec): min=7, max=6998, avg=175.91, stdev=630.36 00:11:22.190 clat (usec): min=3360, max=38986, avg=22805.15, stdev=4714.87 00:11:22.190 lat (usec): min=4047, max=39011, avg=22981.06, stdev=4722.83 00:11:22.190 clat percentiles (usec): 00:11:22.190 | 1.00th=[ 9634], 5.00th=[17433], 10.00th=[18482], 20.00th=[19792], 00:11:22.190 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22414], 60.00th=[22938], 00:11:22.190 | 70.00th=[23725], 80.00th=[25035], 90.00th=[26608], 95.00th=[32900], 00:11:22.190 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:11:22.190 | 99.99th=[39060] 00:11:22.190 bw ( KiB/s): min= 9091, max=11352, per=15.20%, avg=10221.50, stdev=1598.77, samples=2 00:11:22.190 iops : min= 2272, max= 2838, avg=2555.00, stdev=400.22, samples=2 00:11:22.190 lat (msec) : 4=0.02%, 10=0.61%, 20=11.40%, 50=87.97% 00:11:22.190 cpu : usr=2.59%, sys=7.68%, ctx=924, majf=0, minf=23 00:11:22.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:22.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.190 issued rwts: total=2560,2679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.190 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.190 00:11:22.190 Run status group 0 (all jobs): 00:11:22.190 READ: bw=61.3MiB/s (64.3MB/s), 9.96MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.7MiB (64.7MB), run=1003-1006msec 00:11:22.190 WRITE: bw=65.7MiB/s (68.9MB/s), 10.4MiB/s-23.6MiB/s (10.9MB/s-24.8MB/s), io=66.1MiB (69.3MB), run=1003-1006msec 00:11:22.190 00:11:22.190 Disk stats (read/write): 00:11:22.190 nvme0n1: ios=5020/5120, merge=0/0, ticks=16789/15104, in_queue=31893, util=88.57% 00:11:22.190 nvme0n2: ios=2837/3072, merge=0/0, ticks=13632/11337, in_queue=24969, util=89.17% 00:11:22.190 nvme0n3: ios=3657/4096, merge=0/0, ticks=16573/16016, in_queue=32589, util=89.29% 00:11:22.190 nvme0n4: ios=2048/2458, merge=0/0, ticks=13215/14140, in_queue=27355, util=89.22% 00:11:22.190 06:50:26 -- target/fio.sh@55 -- # sync 00:11:22.190 06:50:26 -- target/fio.sh@59 -- # fio_pid=75730 00:11:22.190 
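Each fio-wrapper run in this test (-t write and -t randwrite above, -t read below) expands its flags into the ini-style job file echoed before "Starting 4 threads": -i becomes bs=, -d becomes iodepth=, -t becomes rw=, and -r becomes runtime=. A minimal bash sketch that rebuilds the randwrite job file from the [global] and [job] sections printed above; the wrapper's internals are not part of this log, so this is an illustration, not the wrapper itself:

# Illustrative sketch: recreate and run the job file the log echoes for
# "fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v".
cat > nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-randwrite.fio

The do_verify=1/verify=crc32c-intel pair is why each job reports err= 0 above: fio reads back what it wrote and checksums it, so a clean pass means the data survived the NVMe/TCP round trip intact.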
06:50:26 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.190 06:50:26 -- target/fio.sh@61 -- # sleep 3 00:11:22.190 [global] 00:11:22.190 thread=1 00:11:22.190 invalidate=1 00:11:22.190 rw=read 00:11:22.190 time_based=1 00:11:22.190 runtime=10 00:11:22.190 ioengine=libaio 00:11:22.190 direct=1 00:11:22.190 bs=4096 00:11:22.190 iodepth=1 00:11:22.190 norandommap=1 00:11:22.190 numjobs=1 00:11:22.190 00:11:22.190 [job0] 00:11:22.190 filename=/dev/nvme0n1 00:11:22.190 [job1] 00:11:22.190 filename=/dev/nvme0n2 00:11:22.190 [job2] 00:11:22.190 filename=/dev/nvme0n3 00:11:22.190 [job3] 00:11:22.190 filename=/dev/nvme0n4 00:11:22.190 Could not set queue depth (nvme0n1) 00:11:22.190 Could not set queue depth (nvme0n2) 00:11:22.190 Could not set queue depth (nvme0n3) 00:11:22.190 Could not set queue depth (nvme0n4) 00:11:22.190 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.190 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.190 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.190 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.190 fio-3.35 00:11:22.190 Starting 4 threads 00:11:25.475 06:50:29 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:25.475 fio: pid=75777, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.475 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=62930944, buflen=4096 00:11:25.475 06:50:29 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:25.475 fio: pid=75776, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.475 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=67780608, buflen=4096 00:11:25.475 06:50:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.475 06:50:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:25.734 fio: pid=75774, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.734 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54890496, buflen=4096 00:11:25.734 06:50:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.734 06:50:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:25.992 fio: pid=75775, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.992 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61120512, buflen=4096 00:11:25.992 06:50:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.992 06:50:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:25.992 00:11:25.992 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75774: Fri Dec 13 06:50:30 2024 00:11:25.992 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(52.3MiB/3406msec) 00:11:25.992 slat (usec): min=11, max=16610, avg=17.38, stdev=207.44 00:11:25.992 clat (usec): min=3, max=2516, avg=235.28, 
stdev=38.49 00:11:25.992 lat (usec): min=130, max=16828, avg=252.65, stdev=211.58 00:11:25.992 clat percentiles (usec): 00:11:25.992 | 1.00th=[ 155], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 219], 00:11:25.992 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:11:25.992 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:11:25.992 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 490], 99.95th=[ 635], 00:11:25.992 | 99.99th=[ 1680] 00:11:25.992 bw ( KiB/s): min=15416, max=15992, per=23.91%, avg=15773.33, stdev=199.75, samples=6 00:11:25.992 iops : min= 3854, max= 3998, avg=3943.33, stdev=49.94, samples=6 00:11:25.992 lat (usec) : 4=0.01%, 100=0.01%, 250=78.48%, 500=21.40%, 750=0.05% 00:11:25.992 lat (usec) : 1000=0.01% 00:11:25.992 lat (msec) : 2=0.02%, 4=0.01% 00:11:25.992 cpu : usr=1.29%, sys=4.79%, ctx=13415, majf=0, minf=1 00:11:25.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.992 issued rwts: total=13402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.992 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75775: Fri Dec 13 06:50:30 2024 00:11:25.992 read: IOPS=4085, BW=16.0MiB/s (16.7MB/s)(58.3MiB/3653msec) 00:11:25.992 slat (usec): min=10, max=11405, avg=17.24, stdev=176.57 00:11:25.992 clat (usec): min=55, max=1692, avg=226.18, stdev=45.36 00:11:25.992 lat (usec): min=125, max=11689, avg=243.42, stdev=183.21 00:11:25.992 clat percentiles (usec): 00:11:25.992 | 1.00th=[ 126], 5.00th=[ 143], 10.00th=[ 169], 20.00th=[ 210], 00:11:25.992 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:11:25.992 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:11:25.992 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 523], 99.95th=[ 914], 00:11:25.992 | 99.99th=[ 1614] 00:11:25.992 bw ( KiB/s): min=15712, max=18586, per=24.58%, avg=16214.00, stdev=1050.13, samples=7 00:11:25.992 iops : min= 3928, max= 4646, avg=4053.43, stdev=262.35, samples=7 00:11:25.992 lat (usec) : 100=0.01%, 250=80.73%, 500=19.14%, 750=0.03%, 1000=0.03% 00:11:25.992 lat (msec) : 2=0.05% 00:11:25.992 cpu : usr=1.51%, sys=4.79%, ctx=14932, majf=0, minf=1 00:11:25.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 issued rwts: total=14923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.993 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75776: Fri Dec 13 06:50:30 2024 00:11:25.993 read: IOPS=5235, BW=20.4MiB/s (21.4MB/s)(64.6MiB/3161msec) 00:11:25.993 slat (usec): min=11, max=7835, avg=16.52, stdev=84.38 00:11:25.993 clat (usec): min=126, max=7240, avg=173.02, stdev=68.25 00:11:25.993 lat (usec): min=139, max=8031, avg=189.54, stdev=108.65 00:11:25.993 clat percentiles (usec): 00:11:25.993 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:25.993 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 176], 00:11:25.993 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 
00:11:25.993 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 269], 99.95th=[ 404], 00:11:25.993 | 99.99th=[ 3458] 00:11:25.993 bw ( KiB/s): min=20456, max=21416, per=31.90%, avg=21037.33, stdev=355.82, samples=6 00:11:25.993 iops : min= 5114, max= 5354, avg=5259.33, stdev=88.96, samples=6 00:11:25.993 lat (usec) : 250=99.84%, 500=0.13%, 1000=0.01% 00:11:25.993 lat (msec) : 4=0.02%, 10=0.01% 00:11:25.993 cpu : usr=1.61%, sys=6.99%, ctx=16557, majf=0, minf=2 00:11:25.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 issued rwts: total=16549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.993 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75777: Fri Dec 13 06:50:30 2024 00:11:25.993 read: IOPS=5274, BW=20.6MiB/s (21.6MB/s)(60.0MiB/2913msec) 00:11:25.993 slat (nsec): min=11237, max=66287, avg=14426.63, stdev=3511.75 00:11:25.993 clat (usec): min=131, max=1883, avg=173.91, stdev=30.16 00:11:25.993 lat (usec): min=144, max=1896, avg=188.34, stdev=30.37 00:11:25.993 clat percentiles (usec): 00:11:25.993 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:25.993 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:25.993 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 208], 00:11:25.993 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 510], 99.95th=[ 562], 00:11:25.993 | 99.99th=[ 1336] 00:11:25.993 bw ( KiB/s): min=20248, max=21472, per=31.88%, avg=21027.20, stdev=493.82, samples=5 00:11:25.993 iops : min= 5062, max= 5368, avg=5256.80, stdev=123.46, samples=5 00:11:25.993 lat (usec) : 250=98.93%, 500=0.96%, 750=0.08%, 1000=0.01% 00:11:25.993 lat (msec) : 2=0.01% 00:11:25.993 cpu : usr=1.48%, sys=6.49%, ctx=15366, majf=0, minf=2 00:11:25.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.993 issued rwts: total=15365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.993 00:11:25.993 Run status group 0 (all jobs): 00:11:25.993 READ: bw=64.4MiB/s (67.5MB/s), 15.4MiB/s-20.6MiB/s (16.1MB/s-21.6MB/s), io=235MiB (247MB), run=2913-3653msec 00:11:25.993 00:11:25.993 Disk stats (read/write): 00:11:25.993 nvme0n1: ios=13261/0, merge=0/0, ticks=3178/0, in_queue=3178, util=95.08% 00:11:25.993 nvme0n2: ios=14717/0, merge=0/0, ticks=3430/0, in_queue=3430, util=95.51% 00:11:25.993 nvme0n3: ios=16346/0, merge=0/0, ticks=2924/0, in_queue=2924, util=96.30% 00:11:25.993 nvme0n4: ios=15122/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.66% 00:11:26.262 06:50:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.262 06:50:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:26.545 06:50:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.545 06:50:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:26.804 06:50:31 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.804 06:50:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:27.063 06:50:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.063 06:50:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:27.322 06:50:31 -- target/fio.sh@69 -- # fio_status=0 00:11:27.322 06:50:31 -- target/fio.sh@70 -- # wait 75730 00:11:27.322 06:50:31 -- target/fio.sh@70 -- # fio_status=4 00:11:27.322 06:50:31 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.322 06:50:31 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.322 06:50:31 -- common/autotest_common.sh@1208 -- # local i=0 00:11:27.322 06:50:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:27.322 06:50:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.322 06:50:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.322 06:50:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:27.322 nvmf hotplug test: fio failed as expected 00:11:27.322 06:50:31 -- common/autotest_common.sh@1220 -- # return 0 00:11:27.322 06:50:31 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:27.322 06:50:31 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:27.322 06:50:31 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.889 06:50:32 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:27.889 06:50:32 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:27.889 06:50:32 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:27.889 06:50:32 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:27.889 06:50:32 -- target/fio.sh@91 -- # nvmftestfini 00:11:27.889 06:50:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:27.889 06:50:32 -- nvmf/common.sh@116 -- # sync 00:11:27.889 06:50:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:27.889 06:50:32 -- nvmf/common.sh@119 -- # set +e 00:11:27.889 06:50:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:27.889 06:50:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:27.889 rmmod nvme_tcp 00:11:27.889 rmmod nvme_fabrics 00:11:27.889 rmmod nvme_keyring 00:11:27.889 06:50:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:27.889 06:50:32 -- nvmf/common.sh@123 -- # set -e 00:11:27.889 06:50:32 -- nvmf/common.sh@124 -- # return 0 00:11:27.889 06:50:32 -- nvmf/common.sh@477 -- # '[' -n 75354 ']' 00:11:27.889 06:50:32 -- nvmf/common.sh@478 -- # killprocess 75354 00:11:27.889 06:50:32 -- common/autotest_common.sh@936 -- # '[' -z 75354 ']' 00:11:27.889 06:50:32 -- common/autotest_common.sh@940 -- # kill -0 75354 00:11:27.889 06:50:32 -- common/autotest_common.sh@941 -- # uname 00:11:27.889 06:50:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.889 06:50:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75354 00:11:27.889 killing process with pid 75354 00:11:27.889 06:50:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:27.889 06:50:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:27.889 06:50:32 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 75354' 00:11:27.890 06:50:32 -- common/autotest_common.sh@955 -- # kill 75354 00:11:27.890 06:50:32 -- common/autotest_common.sh@960 -- # wait 75354 00:11:27.890 06:50:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:27.890 06:50:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:27.890 06:50:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:27.890 06:50:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.890 06:50:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:27.890 06:50:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.890 06:50:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.890 06:50:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.890 06:50:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:27.890 00:11:27.890 real 0m19.258s 00:11:27.890 user 1m12.264s 00:11:27.890 sys 0m10.839s 00:11:27.890 06:50:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.890 ************************************ 00:11:27.890 END TEST nvmf_fio_target 00:11:27.890 ************************************ 00:11:27.890 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.890 06:50:32 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:27.890 06:50:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:27.890 06:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.890 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:28.149 ************************************ 00:11:28.149 START TEST nvmf_bdevio 00:11:28.149 ************************************ 00:11:28.149 06:50:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:28.149 * Looking for test storage... 00:11:28.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:28.149 06:50:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:28.149 06:50:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:28.149 06:50:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:28.149 06:50:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:28.149 06:50:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:28.149 06:50:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.149 06:50:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.149 06:50:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.149 06:50:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.149 06:50:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.149 06:50:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.149 06:50:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:28.149 06:50:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:28.149 06:50:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:28.149 06:50:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.149 06:50:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.149 06:50:32 -- scripts/common.sh@344 -- # : 1 00:11:28.149 06:50:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.149 06:50:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.149 06:50:32 -- scripts/common.sh@364 -- # decimal 1 00:11:28.149 06:50:32 -- scripts/common.sh@352 -- # local d=1 00:11:28.149 06:50:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.149 06:50:32 -- scripts/common.sh@354 -- # echo 1 00:11:28.149 06:50:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.149 06:50:32 -- scripts/common.sh@365 -- # decimal 2 00:11:28.149 06:50:32 -- scripts/common.sh@352 -- # local d=2 00:11:28.149 06:50:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.149 06:50:32 -- scripts/common.sh@354 -- # echo 2 00:11:28.149 06:50:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:28.149 06:50:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.149 06:50:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.149 06:50:32 -- scripts/common.sh@367 -- # return 0 00:11:28.149 06:50:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.149 06:50:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.149 --rc genhtml_branch_coverage=1 00:11:28.149 --rc genhtml_function_coverage=1 00:11:28.149 --rc genhtml_legend=1 00:11:28.149 --rc geninfo_all_blocks=1 00:11:28.149 --rc geninfo_unexecuted_blocks=1 00:11:28.149 00:11:28.149 ' 00:11:28.149 06:50:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.149 --rc genhtml_branch_coverage=1 00:11:28.149 --rc genhtml_function_coverage=1 00:11:28.149 --rc genhtml_legend=1 00:11:28.149 --rc geninfo_all_blocks=1 00:11:28.149 --rc geninfo_unexecuted_blocks=1 00:11:28.149 00:11:28.149 ' 00:11:28.149 06:50:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.149 --rc genhtml_branch_coverage=1 00:11:28.149 --rc genhtml_function_coverage=1 00:11:28.149 --rc genhtml_legend=1 00:11:28.149 --rc geninfo_all_blocks=1 00:11:28.149 --rc geninfo_unexecuted_blocks=1 00:11:28.149 00:11:28.149 ' 00:11:28.149 06:50:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.149 --rc genhtml_branch_coverage=1 00:11:28.149 --rc genhtml_function_coverage=1 00:11:28.149 --rc genhtml_legend=1 00:11:28.149 --rc geninfo_all_blocks=1 00:11:28.149 --rc geninfo_unexecuted_blocks=1 00:11:28.149 00:11:28.149 ' 00:11:28.149 06:50:32 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.149 06:50:32 -- nvmf/common.sh@7 -- # uname -s 00:11:28.149 06:50:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.149 06:50:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.149 06:50:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.149 06:50:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.149 06:50:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.149 06:50:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.149 06:50:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.149 06:50:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.149 06:50:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.149 06:50:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:28.149 
06:50:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:28.149 06:50:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.149 06:50:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.149 06:50:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:28.149 06:50:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.149 06:50:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.149 06:50:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.149 06:50:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.149 06:50:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.149 06:50:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.149 06:50:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.149 06:50:32 -- paths/export.sh@5 -- # export PATH 00:11:28.149 06:50:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.149 06:50:32 -- nvmf/common.sh@46 -- # : 0 00:11:28.149 06:50:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:28.149 06:50:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:28.149 06:50:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:28.149 06:50:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.149 06:50:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.149 06:50:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
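Two helper layers get exercised while bdevio.sh sources its includes above: scripts/common.sh's version gate (the "lt 1.15 2" walk that decides which lcov option set applies) and nvmf/common.sh's host-identity and app-argument setup. The version gate is the intricate one. A condensed bash sketch of the comparison logic as the trace shows it; the real helper additionally sanitizes each field through decimal() and tracks lt/gt/eq counters, which is simplified away here:

# Condensed from the scripts/common.sh trace: split two version strings
# on ".", "-" or ":" and compare them field by field.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]                      # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }      # lt 1.15 2 -> exit 0 (true)

Because 1 < 2 settles the comparison on the first field, the 15 in 1.15 is never examined, which matches the single "decimal 1"/"decimal 2" round visible in the trace before return 0.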
00:11:28.149 06:50:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:28.149 06:50:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:28.149 06:50:32 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.149 06:50:32 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.149 06:50:32 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:28.149 06:50:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:28.149 06:50:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.149 06:50:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:28.149 06:50:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:28.149 06:50:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:28.149 06:50:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.149 06:50:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.149 06:50:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.149 06:50:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:28.149 06:50:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:28.149 06:50:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.149 06:50:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.149 06:50:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:28.149 06:50:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:28.149 06:50:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.149 06:50:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.149 06:50:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.149 06:50:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.149 06:50:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.149 06:50:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.149 06:50:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.149 06:50:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.149 06:50:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:28.150 06:50:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:28.150 Cannot find device "nvmf_tgt_br" 00:11:28.150 06:50:32 -- nvmf/common.sh@154 -- # true 00:11:28.150 06:50:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.150 Cannot find device "nvmf_tgt_br2" 00:11:28.150 06:50:32 -- nvmf/common.sh@155 -- # true 00:11:28.150 06:50:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:28.150 06:50:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:28.408 Cannot find device "nvmf_tgt_br" 00:11:28.408 06:50:32 -- nvmf/common.sh@157 -- # true 00:11:28.408 06:50:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:28.408 Cannot find device "nvmf_tgt_br2" 00:11:28.408 06:50:32 -- nvmf/common.sh@158 -- # true 00:11:28.408 06:50:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:28.408 06:50:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:28.408 06:50:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.408 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:28.408 06:50:32 -- nvmf/common.sh@161 -- # true 00:11:28.408 06:50:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.408 06:50:32 -- nvmf/common.sh@162 -- # true 00:11:28.408 06:50:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.408 06:50:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.408 06:50:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.408 06:50:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.408 06:50:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.408 06:50:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.408 06:50:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.408 06:50:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:28.408 06:50:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:28.408 06:50:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:28.408 06:50:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:28.408 06:50:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:28.408 06:50:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:28.408 06:50:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.408 06:50:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.408 06:50:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.408 06:50:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:28.408 06:50:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:28.408 06:50:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.409 06:50:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.409 06:50:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.409 06:50:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.409 06:50:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.409 06:50:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:28.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:28.409 00:11:28.409 --- 10.0.0.2 ping statistics --- 00:11:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.409 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:28.409 06:50:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:28.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:28.409 00:11:28.409 --- 10.0.0.3 ping statistics --- 00:11:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.409 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:28.409 06:50:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:28.409 00:11:28.409 --- 10.0.0.1 ping statistics --- 00:11:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.409 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:28.409 06:50:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.409 06:50:32 -- nvmf/common.sh@421 -- # return 0 00:11:28.409 06:50:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:28.409 06:50:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.409 06:50:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:28.409 06:50:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:28.409 06:50:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.409 06:50:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:28.409 06:50:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:28.667 06:50:32 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:28.667 06:50:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:28.667 06:50:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.667 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:28.667 06:50:32 -- nvmf/common.sh@469 -- # nvmfpid=76050 00:11:28.667 06:50:32 -- nvmf/common.sh@470 -- # waitforlisten 76050 00:11:28.667 06:50:32 -- common/autotest_common.sh@829 -- # '[' -z 76050 ']' 00:11:28.667 06:50:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.667 06:50:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.667 06:50:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.667 06:50:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:28.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.667 06:50:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.667 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:28.667 [2024-12-13 06:50:32.982951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:28.667 [2024-12-13 06:50:32.983022] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.667 [2024-12-13 06:50:33.120172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.667 [2024-12-13 06:50:33.155269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.667 [2024-12-13 06:50:33.155462] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.667 [2024-12-13 06:50:33.155477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.667 [2024-12-13 06:50:33.155486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
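The nvmf_veth_init sequence traced above assembles the whole test network before nvmf_tgt starts: one network namespace for the target, three veth pairs, a bridge joining their host-side ends, and firewall rules admitting NVMe/TCP on port 4420, verified by the three pings. The same commands, condensed into one runnable sketch (interface names and addresses copied from the trace; the error-tolerant teardown that precedes them is omitted):

# Condensed from the nvmf/common.sh trace above.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two target-facing.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and address everything.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring every interface up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and admit NVMe/TCP traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator -> target, as in the trace

This is the topology the rest of the test depends on: nvmf_subsystem_add_listener below binds 10.0.0.2:4420 inside the namespace, and the initiator reaches it from 10.0.0.1 across nvmf_br.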
00:11:28.667 [2024-12-13 06:50:33.155549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:28.667 [2024-12-13 06:50:33.155672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:28.667 [2024-12-13 06:50:33.155979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:28.667 [2024-12-13 06:50:33.156096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.602 06:50:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.602 06:50:33 -- common/autotest_common.sh@862 -- # return 0 00:11:29.602 06:50:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:29.602 06:50:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.602 06:50:33 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 06:50:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.602 06:50:34 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.602 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.602 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 [2024-12-13 06:50:34.013196] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.602 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 06:50:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:29.602 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.602 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 Malloc0 00:11:29.602 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 06:50:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.602 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.602 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 06:50:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.602 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.602 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 06:50:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.602 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.602 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.602 [2024-12-13 06:50:34.075459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.602 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 06:50:34 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:29.602 06:50:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:29.602 06:50:34 -- nvmf/common.sh@520 -- # config=() 00:11:29.602 06:50:34 -- nvmf/common.sh@520 -- # local subsystem config 00:11:29.602 06:50:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:29.602 06:50:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:29.602 { 00:11:29.602 "params": { 00:11:29.602 "name": "Nvme$subsystem", 00:11:29.602 "trtype": "$TEST_TRANSPORT", 00:11:29.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.602 "adrfam": "ipv4", 00:11:29.602 "trsvcid": "$NVMF_PORT", 00:11:29.602 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.602 "hdgst": ${hdgst:-false}, 00:11:29.602 "ddgst": ${ddgst:-false} 00:11:29.602 }, 00:11:29.602 "method": "bdev_nvme_attach_controller" 00:11:29.602 } 00:11:29.602 EOF 00:11:29.602 )") 00:11:29.602 06:50:34 -- nvmf/common.sh@542 -- # cat 00:11:29.602 06:50:34 -- nvmf/common.sh@544 -- # jq . 00:11:29.602 06:50:34 -- nvmf/common.sh@545 -- # IFS=, 00:11:29.602 06:50:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:29.602 "params": { 00:11:29.602 "name": "Nvme1", 00:11:29.602 "trtype": "tcp", 00:11:29.602 "traddr": "10.0.0.2", 00:11:29.602 "adrfam": "ipv4", 00:11:29.602 "trsvcid": "4420", 00:11:29.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.602 "hdgst": false, 00:11:29.602 "ddgst": false 00:11:29.602 }, 00:11:29.602 "method": "bdev_nvme_attach_controller" 00:11:29.602 }' 00:11:29.861 [2024-12-13 06:50:34.129653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:29.861 [2024-12-13 06:50:34.129742] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76086 ] 00:11:29.861 [2024-12-13 06:50:34.270093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:29.861 [2024-12-13 06:50:34.310928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.861 [2024-12-13 06:50:34.310833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.861 [2024-12-13 06:50:34.310919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.119 [2024-12-13 06:50:34.445673] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:30.119 [2024-12-13 06:50:34.445731] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:30.119 I/O targets: 00:11:30.119 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:30.119 00:11:30.119 00:11:30.119 CUnit - A unit testing framework for C - Version 2.1-3 00:11:30.119 http://cunit.sourceforge.net/ 00:11:30.119 00:11:30.119 00:11:30.119 Suite: bdevio tests on: Nvme1n1 00:11:30.119 Test: blockdev write read block ...passed 00:11:30.119 Test: blockdev write zeroes read block ...passed 00:11:30.119 Test: blockdev write zeroes read no split ...passed 00:11:30.119 Test: blockdev write zeroes read split ...passed 00:11:30.119 Test: blockdev write zeroes read split partial ...passed 00:11:30.119 Test: blockdev reset ...[2024-12-13 06:50:34.479220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:30.119 [2024-12-13 06:50:34.479328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bccea0 (9): Bad file descriptor 00:11:30.119 passed 00:11:30.119 Test: blockdev write read 8 blocks ...[2024-12-13 06:50:34.495970] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:30.119 passed 00:11:30.119 Test: blockdev write read size > 128k ...passed 00:11:30.119 Test: blockdev write read invalid size ...passed 00:11:30.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:30.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:30.119 Test: blockdev write read max offset ...passed 00:11:30.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:30.119 Test: blockdev writev readv 8 blocks ...passed 00:11:30.119 Test: blockdev writev readv 30 x 1block ...passed 00:11:30.119 Test: blockdev writev readv block ...passed 00:11:30.119 Test: blockdev writev readv size > 128k ...passed 00:11:30.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:30.120 Test: blockdev comparev and writev ...[2024-12-13 06:50:34.503597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.503654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.503681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.503695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:30.120 passed 00:11:30.120 Test: blockdev nvme passthru rw ...[2024-12-13 06:50:34.504120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.504171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.504488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.504531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.504829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.504869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.120 [2024-12-13 06:50:34.504882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:30.120 passed 00:11:30.120 Test: blockdev nvme passthru vendor specific ...[2024-12-13 06:50:34.505749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.120 passed 00:11:30.120 Test: blockdev nvme admin passthru ...[2024-12-13 06:50:34.505779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.505907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.120 [2024-12-13 06:50:34.505926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.506045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.120 [2024-12-13 06:50:34.506064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:30.120 [2024-12-13 06:50:34.506180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.120 [2024-12-13 06:50:34.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:30.120 passed 00:11:30.120 Test: blockdev copy ...passed 00:11:30.120 00:11:30.120 Run Summary: Type Total Ran Passed Failed Inactive 00:11:30.120 suites 1 1 n/a 0 0 00:11:30.120 tests 23 23 23 0 0 00:11:30.120 asserts 152 152 152 0 n/a 00:11:30.120 00:11:30.120 Elapsed time = 0.149 seconds 00:11:30.379 06:50:34 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.379 06:50:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.379 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:11:30.379 06:50:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.379 06:50:34 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:30.379 06:50:34 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:30.379 06:50:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:30.379 06:50:34 -- nvmf/common.sh@116 -- # sync 00:11:30.379 06:50:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:30.379 06:50:34 -- nvmf/common.sh@119 -- # set +e 00:11:30.379 06:50:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:30.379 06:50:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:30.379 rmmod nvme_tcp 00:11:30.379 rmmod nvme_fabrics 00:11:30.379 rmmod nvme_keyring 00:11:30.379 06:50:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:30.379 06:50:34 -- nvmf/common.sh@123 -- # set -e 00:11:30.379 06:50:34 -- nvmf/common.sh@124 -- # return 0 00:11:30.379 06:50:34 -- nvmf/common.sh@477 -- # '[' -n 76050 ']' 00:11:30.379 06:50:34 -- nvmf/common.sh@478 -- # killprocess 76050 00:11:30.379 06:50:34 -- common/autotest_common.sh@936 -- # '[' -z 76050 ']' 00:11:30.379 06:50:34 -- common/autotest_common.sh@940 -- # kill -0 76050 00:11:30.379 06:50:34 -- common/autotest_common.sh@941 -- # uname 00:11:30.379 06:50:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.379 06:50:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76050 00:11:30.379 killing process with pid 76050 00:11:30.379 06:50:34 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:11:30.379 06:50:34 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:30.379 06:50:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76050' 00:11:30.379 06:50:34 -- common/autotest_common.sh@955 -- # kill 76050 00:11:30.379 06:50:34 -- common/autotest_common.sh@960 -- # wait 76050 00:11:30.638 06:50:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:30.638 06:50:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:30.638 06:50:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:30.638 06:50:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.638 06:50:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:30.638 06:50:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.638 06:50:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.638 06:50:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.638 06:50:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:30.638 ************************************ 00:11:30.638 END TEST nvmf_bdevio 00:11:30.638 ************************************ 00:11:30.638 00:11:30.638 real 0m2.586s 00:11:30.638 user 0m8.415s 00:11:30.638 sys 0m0.682s 00:11:30.638 06:50:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.638 06:50:35 -- common/autotest_common.sh@10 -- # set +x 00:11:30.638 06:50:35 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:30.638 06:50:35 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:30.638 06:50:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:30.638 06:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.638 06:50:35 -- common/autotest_common.sh@10 -- # set +x 00:11:30.638 ************************************ 00:11:30.638 START TEST nvmf_bdevio_no_huge 00:11:30.638 ************************************ 00:11:30.638 06:50:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:30.638 * Looking for test storage... 
00:11:30.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:30.638 06:50:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.638 06:50:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.638 06:50:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.896 06:50:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.896 06:50:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.896 06:50:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.896 06:50:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.896 06:50:35 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.896 06:50:35 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.896 06:50:35 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.896 06:50:35 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.896 06:50:35 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.896 06:50:35 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.896 06:50:35 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.896 06:50:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.896 06:50:35 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.896 06:50:35 -- scripts/common.sh@344 -- # : 1 00:11:30.896 06:50:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.896 06:50:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.896 06:50:35 -- scripts/common.sh@364 -- # decimal 1 00:11:30.896 06:50:35 -- scripts/common.sh@352 -- # local d=1 00:11:30.896 06:50:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.896 06:50:35 -- scripts/common.sh@354 -- # echo 1 00:11:30.896 06:50:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.896 06:50:35 -- scripts/common.sh@365 -- # decimal 2 00:11:30.896 06:50:35 -- scripts/common.sh@352 -- # local d=2 00:11:30.896 06:50:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.896 06:50:35 -- scripts/common.sh@354 -- # echo 2 00:11:30.896 06:50:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.896 06:50:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.896 06:50:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.896 06:50:35 -- scripts/common.sh@367 -- # return 0 00:11:30.896 06:50:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.896 06:50:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.896 --rc genhtml_branch_coverage=1 00:11:30.896 --rc genhtml_function_coverage=1 00:11:30.896 --rc genhtml_legend=1 00:11:30.896 --rc geninfo_all_blocks=1 00:11:30.896 --rc geninfo_unexecuted_blocks=1 00:11:30.896 00:11:30.896 ' 00:11:30.896 06:50:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.897 --rc genhtml_branch_coverage=1 00:11:30.897 --rc genhtml_function_coverage=1 00:11:30.897 --rc genhtml_legend=1 00:11:30.897 --rc geninfo_all_blocks=1 00:11:30.897 --rc geninfo_unexecuted_blocks=1 00:11:30.897 00:11:30.897 ' 00:11:30.897 06:50:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.897 --rc genhtml_branch_coverage=1 00:11:30.897 --rc genhtml_function_coverage=1 00:11:30.897 --rc genhtml_legend=1 00:11:30.897 --rc geninfo_all_blocks=1 00:11:30.897 --rc geninfo_unexecuted_blocks=1 00:11:30.897 00:11:30.897 ' 00:11:30.897 
06:50:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.897 --rc genhtml_branch_coverage=1 00:11:30.897 --rc genhtml_function_coverage=1 00:11:30.897 --rc genhtml_legend=1 00:11:30.897 --rc geninfo_all_blocks=1 00:11:30.897 --rc geninfo_unexecuted_blocks=1 00:11:30.897 00:11:30.897 ' 00:11:30.897 06:50:35 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:30.897 06:50:35 -- nvmf/common.sh@7 -- # uname -s 00:11:30.897 06:50:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.897 06:50:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.897 06:50:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.897 06:50:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.897 06:50:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.897 06:50:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.897 06:50:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.897 06:50:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.897 06:50:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.897 06:50:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:30.897 06:50:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:30.897 06:50:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.897 06:50:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.897 06:50:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:30.897 06:50:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.897 06:50:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.897 06:50:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.897 06:50:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.897 06:50:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 06:50:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 06:50:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 06:50:35 -- paths/export.sh@5 -- # export PATH 00:11:30.897 06:50:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 06:50:35 -- nvmf/common.sh@46 -- # : 0 00:11:30.897 06:50:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:30.897 06:50:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:30.897 06:50:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:30.897 06:50:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.897 06:50:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.897 06:50:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:30.897 06:50:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:30.897 06:50:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:30.897 06:50:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.897 06:50:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.897 06:50:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:30.897 06:50:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:30.897 06:50:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.897 06:50:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:30.897 06:50:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:30.897 06:50:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:30.897 06:50:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.897 06:50:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.897 06:50:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.897 06:50:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:30.897 06:50:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:30.897 06:50:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.897 06:50:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.897 06:50:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:30.897 06:50:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:30.897 06:50:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:30.897 06:50:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:30.897 06:50:35 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:30.897 06:50:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.897 06:50:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:30.897 06:50:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:30.897 06:50:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:30.897 06:50:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:30.897 06:50:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:30.897 06:50:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:30.897 Cannot find device "nvmf_tgt_br" 00:11:30.897 06:50:35 -- nvmf/common.sh@154 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.897 Cannot find device "nvmf_tgt_br2" 00:11:30.897 06:50:35 -- nvmf/common.sh@155 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:30.897 06:50:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:30.897 Cannot find device "nvmf_tgt_br" 00:11:30.897 06:50:35 -- nvmf/common.sh@157 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:30.897 Cannot find device "nvmf_tgt_br2" 00:11:30.897 06:50:35 -- nvmf/common.sh@158 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:30.897 06:50:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:30.897 06:50:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:30.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.897 06:50:35 -- nvmf/common.sh@161 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:30.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.897 06:50:35 -- nvmf/common.sh@162 -- # true 00:11:30.897 06:50:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:30.897 06:50:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:30.897 06:50:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:30.897 06:50:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:31.155 06:50:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:31.155 06:50:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:31.155 06:50:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:31.155 06:50:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:31.155 06:50:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:31.155 06:50:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:31.155 06:50:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:31.155 06:50:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:31.156 06:50:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:31.156 06:50:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:31.156 06:50:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:31.156 06:50:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:31.156 06:50:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:31.156 06:50:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:31.156 06:50:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:31.156 06:50:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:31.156 06:50:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:31.156 06:50:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:31.156 06:50:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:31.156 06:50:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:31.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:31.156 00:11:31.156 --- 10.0.0.2 ping statistics --- 00:11:31.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.156 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:31.156 06:50:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:31.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:31.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:11:31.156 00:11:31.156 --- 10.0.0.3 ping statistics --- 00:11:31.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.156 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:31.156 06:50:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:31.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:31.156 00:11:31.156 --- 10.0.0.1 ping statistics --- 00:11:31.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.156 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:31.156 06:50:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.156 06:50:35 -- nvmf/common.sh@421 -- # return 0 00:11:31.156 06:50:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:31.156 06:50:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.156 06:50:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:31.156 06:50:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:31.156 06:50:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.156 06:50:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:31.156 06:50:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:31.156 06:50:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:31.156 06:50:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:31.156 06:50:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.156 06:50:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.156 06:50:35 -- nvmf/common.sh@469 -- # nvmfpid=76266 00:11:31.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
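
The wiring replayed above is the harness's standard veth topology: each endpoint gets a veth pair, the host-side peers hang off a single bridge, and the three pings confirm reachability. A condensed sketch with one target interface (the log adds nvmf_tgt_if2/10.0.0.3 the same way):

# Namespace plus veth pairs; host-side peer ends keep the *_br names.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator address on the host, target address inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

for link in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins the host-side peers; open TCP 4420 for NVMe/TCP.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # host to namespace, as verified in the log
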
00:11:31.156 06:50:35 -- nvmf/common.sh@470 -- # waitforlisten 76266 00:11:31.156 06:50:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:31.156 06:50:35 -- common/autotest_common.sh@829 -- # '[' -z 76266 ']' 00:11:31.156 06:50:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.156 06:50:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.156 06:50:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.156 06:50:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.156 06:50:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.156 [2024-12-13 06:50:35.640759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:31.156 [2024-12-13 06:50:35.640847] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:31.414 [2024-12-13 06:50:35.779309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.414 [2024-12-13 06:50:35.882564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.414 [2024-12-13 06:50:35.882771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.414 [2024-12-13 06:50:35.882786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.414 [2024-12-13 06:50:35.882797] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
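
This second bdevio pass repeats the first; the distinguishing knobs are --no-huge -s 1024, which back the EAL with regular anonymous pages capped at 1024 MB instead of hugepages. The EAL parameter line above reflects this: -m 1024 --no-huge and --iova-mode=va, where the hugepage run used --iova-mode=pa. Side by side, only the launch flags differ (paths verbatim from this job):

# Hugepage-backed target (earlier pass):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

# Hugepage-free target (this pass): anonymous memory, 1024 MB cap.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
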
00:11:31.414 [2024-12-13 06:50:35.882962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:31.414 [2024-12-13 06:50:35.883520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:31.414 [2024-12-13 06:50:35.883675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:31.414 [2024-12-13 06:50:35.883675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.348 06:50:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.348 06:50:36 -- common/autotest_common.sh@862 -- # return 0 00:11:32.348 06:50:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:32.348 06:50:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.348 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.348 06:50:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.349 06:50:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.349 06:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.349 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 [2024-12-13 06:50:36.692098] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.349 06:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.349 06:50:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:32.349 06:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.349 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 Malloc0 00:11:32.349 06:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.349 06:50:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.349 06:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.349 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 06:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.349 06:50:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.349 06:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.349 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 06:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.349 06:50:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.349 06:50:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.349 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 [2024-12-13 06:50:36.736292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.349 06:50:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.349 06:50:36 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:32.349 06:50:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:32.349 06:50:36 -- nvmf/common.sh@520 -- # config=() 00:11:32.349 06:50:36 -- nvmf/common.sh@520 -- # local subsystem config 00:11:32.349 06:50:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:32.349 06:50:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:32.349 { 00:11:32.349 "params": { 00:11:32.349 "name": "Nvme$subsystem", 00:11:32.349 "trtype": "$TEST_TRANSPORT", 00:11:32.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.349 "adrfam": "ipv4", 00:11:32.349 "trsvcid": "$NVMF_PORT", 
00:11:32.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.349 "hdgst": ${hdgst:-false}, 00:11:32.349 "ddgst": ${ddgst:-false} 00:11:32.349 }, 00:11:32.349 "method": "bdev_nvme_attach_controller" 00:11:32.349 } 00:11:32.349 EOF 00:11:32.349 )") 00:11:32.349 06:50:36 -- nvmf/common.sh@542 -- # cat 00:11:32.349 06:50:36 -- nvmf/common.sh@544 -- # jq . 00:11:32.349 06:50:36 -- nvmf/common.sh@545 -- # IFS=, 00:11:32.349 06:50:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:32.349 "params": { 00:11:32.349 "name": "Nvme1", 00:11:32.349 "trtype": "tcp", 00:11:32.349 "traddr": "10.0.0.2", 00:11:32.349 "adrfam": "ipv4", 00:11:32.349 "trsvcid": "4420", 00:11:32.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:32.349 "hdgst": false, 00:11:32.349 "ddgst": false 00:11:32.349 }, 00:11:32.349 "method": "bdev_nvme_attach_controller" 00:11:32.349 }' 00:11:32.349 [2024-12-13 06:50:36.790660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:32.349 [2024-12-13 06:50:36.790754] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76302 ] 00:11:32.608 [2024-12-13 06:50:36.930803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.608 [2024-12-13 06:50:37.031982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.608 [2024-12-13 06:50:37.032131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.608 [2024-12-13 06:50:37.032138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.867 [2024-12-13 06:50:37.193020] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:32.867 [2024-12-13 06:50:37.193057] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:32.867 I/O targets: 00:11:32.867 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:32.867 00:11:32.867 00:11:32.867 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.867 http://cunit.sourceforge.net/ 00:11:32.867 00:11:32.867 00:11:32.867 Suite: bdevio tests on: Nvme1n1 00:11:32.867 Test: blockdev write read block ...passed 00:11:32.867 Test: blockdev write zeroes read block ...passed 00:11:32.867 Test: blockdev write zeroes read no split ...passed 00:11:32.867 Test: blockdev write zeroes read split ...passed 00:11:32.867 Test: blockdev write zeroes read split partial ...passed 00:11:32.867 Test: blockdev reset ...[2024-12-13 06:50:37.232257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:32.867 [2024-12-13 06:50:37.232370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e260 (9): Bad file descriptor 00:11:32.867 [2024-12-13 06:50:37.242932] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:32.867 passed 00:11:32.867 Test: blockdev write read 8 blocks ...passed 00:11:32.867 Test: blockdev write read size > 128k ...passed 00:11:32.867 Test: blockdev write read invalid size ...passed 00:11:32.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:32.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:32.867 Test: blockdev write read max offset ...passed 00:11:32.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:32.867 Test: blockdev writev readv 8 blocks ...passed 00:11:32.867 Test: blockdev writev readv 30 x 1block ...passed 00:11:32.867 Test: blockdev writev readv block ...passed 00:11:32.867 Test: blockdev writev readv size > 128k ...passed 00:11:32.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:32.867 Test: blockdev comparev and writev ...[2024-12-13 06:50:37.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.250961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.251000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.251011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.251403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.251432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.251450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.251461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.251859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.251900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.251919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.251930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.252230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.252261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.252280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.867 [2024-12-13 06:50:37.252291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:32.867 passed 00:11:32.867 Test: blockdev nvme passthru rw ...passed 00:11:32.867 Test: blockdev nvme passthru vendor specific ...[2024-12-13 06:50:37.253294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.867 [2024-12-13 06:50:37.253319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.253448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.867 [2024-12-13 06:50:37.253465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.253573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.867 [2024-12-13 06:50:37.253595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:32.867 [2024-12-13 06:50:37.253706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.867 [2024-12-13 06:50:37.253735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:32.867 passed 00:11:32.867 Test: blockdev nvme admin passthru ...passed 00:11:32.867 Test: blockdev copy ...passed 00:11:32.867 00:11:32.867 Run Summary: Type Total Ran Passed Failed Inactive 00:11:32.867 suites 1 1 n/a 0 0 00:11:32.867 tests 23 23 23 0 0 00:11:32.867 asserts 152 152 152 0 n/a 00:11:32.867 00:11:32.867 Elapsed time = 0.162 seconds 00:11:33.126 06:50:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.126 06:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.126 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:11:33.126 06:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.126 06:50:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:33.126 06:50:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:33.126 06:50:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:33.126 06:50:37 -- nvmf/common.sh@116 -- # sync 00:11:33.126 06:50:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:33.126 06:50:37 -- nvmf/common.sh@119 -- # set +e 00:11:33.126 06:50:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:33.126 06:50:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:33.126 rmmod nvme_tcp 00:11:33.126 rmmod nvme_fabrics 00:11:33.126 rmmod nvme_keyring 00:11:33.385 06:50:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:33.385 06:50:37 -- nvmf/common.sh@123 -- # set -e 00:11:33.385 06:50:37 -- nvmf/common.sh@124 -- # return 0 00:11:33.385 06:50:37 -- nvmf/common.sh@477 -- # '[' -n 76266 ']' 00:11:33.385 06:50:37 -- nvmf/common.sh@478 -- # killprocess 76266 00:11:33.385 06:50:37 -- common/autotest_common.sh@936 -- # '[' -z 76266 ']' 00:11:33.385 06:50:37 -- common/autotest_common.sh@940 -- # kill -0 76266 00:11:33.385 06:50:37 -- common/autotest_common.sh@941 -- # uname 00:11:33.385 06:50:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.385 06:50:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76266 00:11:33.385 killing process with pid 76266 00:11:33.385 
06:50:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:33.385 06:50:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:33.385 06:50:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76266' 00:11:33.385 06:50:37 -- common/autotest_common.sh@955 -- # kill 76266 00:11:33.385 06:50:37 -- common/autotest_common.sh@960 -- # wait 76266 00:11:33.644 06:50:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:33.644 06:50:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:33.644 06:50:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:33.644 06:50:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.644 06:50:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:33.644 06:50:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.644 06:50:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.644 06:50:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.644 06:50:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:33.644 00:11:33.644 real 0m2.994s 00:11:33.644 user 0m9.652s 00:11:33.644 sys 0m1.164s 00:11:33.644 ************************************ 00:11:33.644 END TEST nvmf_bdevio_no_huge 00:11:33.644 ************************************ 00:11:33.644 06:50:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.644 06:50:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 06:50:38 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:33.644 06:50:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.644 06:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.644 06:50:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 ************************************ 00:11:33.644 START TEST nvmf_tls 00:11:33.644 ************************************ 00:11:33.644 06:50:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:33.903 * Looking for test storage... 00:11:33.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.903 06:50:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:33.903 06:50:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:33.903 06:50:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:33.903 06:50:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:33.903 06:50:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:33.903 06:50:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:33.903 06:50:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:33.903 06:50:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:33.903 06:50:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:33.903 06:50:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.903 06:50:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:33.903 06:50:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:33.903 06:50:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:33.903 06:50:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:33.903 06:50:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:33.903 06:50:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:33.903 06:50:38 -- scripts/common.sh@344 -- # : 1 00:11:33.903 06:50:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:33.903 06:50:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.903 06:50:38 -- scripts/common.sh@364 -- # decimal 1 00:11:33.903 06:50:38 -- scripts/common.sh@352 -- # local d=1 00:11:33.903 06:50:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.903 06:50:38 -- scripts/common.sh@354 -- # echo 1 00:11:33.903 06:50:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:33.903 06:50:38 -- scripts/common.sh@365 -- # decimal 2 00:11:33.903 06:50:38 -- scripts/common.sh@352 -- # local d=2 00:11:33.904 06:50:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.904 06:50:38 -- scripts/common.sh@354 -- # echo 2 00:11:33.904 06:50:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:33.904 06:50:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:33.904 06:50:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:33.904 06:50:38 -- scripts/common.sh@367 -- # return 0 00:11:33.904 06:50:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.904 06:50:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.904 --rc genhtml_branch_coverage=1 00:11:33.904 --rc genhtml_function_coverage=1 00:11:33.904 --rc genhtml_legend=1 00:11:33.904 --rc geninfo_all_blocks=1 00:11:33.904 --rc geninfo_unexecuted_blocks=1 00:11:33.904 00:11:33.904 ' 00:11:33.904 06:50:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.904 --rc genhtml_branch_coverage=1 00:11:33.904 --rc genhtml_function_coverage=1 00:11:33.904 --rc genhtml_legend=1 00:11:33.904 --rc geninfo_all_blocks=1 00:11:33.904 --rc geninfo_unexecuted_blocks=1 00:11:33.904 00:11:33.904 ' 00:11:33.904 06:50:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.904 --rc genhtml_branch_coverage=1 00:11:33.904 --rc genhtml_function_coverage=1 00:11:33.904 --rc genhtml_legend=1 00:11:33.904 --rc geninfo_all_blocks=1 00:11:33.904 --rc geninfo_unexecuted_blocks=1 00:11:33.904 00:11:33.904 ' 00:11:33.904 06:50:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.904 --rc genhtml_branch_coverage=1 00:11:33.904 --rc genhtml_function_coverage=1 00:11:33.904 --rc genhtml_legend=1 00:11:33.904 --rc geninfo_all_blocks=1 00:11:33.904 --rc geninfo_unexecuted_blocks=1 00:11:33.904 00:11:33.904 ' 00:11:33.904 06:50:38 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.904 06:50:38 -- nvmf/common.sh@7 -- # uname -s 00:11:33.904 06:50:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.904 06:50:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.904 06:50:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.904 06:50:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.904 06:50:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.904 06:50:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.904 06:50:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.904 06:50:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.904 06:50:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.904 06:50:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:33.904 
06:50:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:11:33.904 06:50:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.904 06:50:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.904 06:50:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.904 06:50:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.904 06:50:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.904 06:50:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.904 06:50:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.904 06:50:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.904 06:50:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.904 06:50:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.904 06:50:38 -- paths/export.sh@5 -- # export PATH 00:11:33.904 06:50:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.904 06:50:38 -- nvmf/common.sh@46 -- # : 0 00:11:33.904 06:50:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:33.904 06:50:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:33.904 06:50:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:33.904 06:50:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.904 06:50:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.904 06:50:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:11:33.904 06:50:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:33.904 06:50:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:33.904 06:50:38 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.904 06:50:38 -- target/tls.sh@71 -- # nvmftestinit 00:11:33.904 06:50:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:33.904 06:50:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.904 06:50:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:33.904 06:50:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:33.904 06:50:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:33.904 06:50:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.904 06:50:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.904 06:50:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.904 06:50:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:33.904 06:50:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:33.904 06:50:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.904 06:50:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.904 06:50:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:33.904 06:50:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:33.904 06:50:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.904 06:50:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.904 06:50:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.904 06:50:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.904 06:50:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.904 06:50:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.904 06:50:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.904 06:50:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.904 06:50:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:33.904 06:50:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:33.904 Cannot find device "nvmf_tgt_br" 00:11:33.904 06:50:38 -- nvmf/common.sh@154 -- # true 00:11:33.904 06:50:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.904 Cannot find device "nvmf_tgt_br2" 00:11:33.904 06:50:38 -- nvmf/common.sh@155 -- # true 00:11:33.904 06:50:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:33.904 06:50:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:33.904 Cannot find device "nvmf_tgt_br" 00:11:33.904 06:50:38 -- nvmf/common.sh@157 -- # true 00:11:33.904 06:50:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:33.904 Cannot find device "nvmf_tgt_br2" 00:11:33.904 06:50:38 -- nvmf/common.sh@158 -- # true 00:11:33.904 06:50:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:33.904 06:50:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:34.163 06:50:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:11:34.163 06:50:38 -- nvmf/common.sh@161 -- # true 00:11:34.163 06:50:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.163 06:50:38 -- nvmf/common.sh@162 -- # true 00:11:34.163 06:50:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.163 06:50:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.163 06:50:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.163 06:50:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.163 06:50:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.163 06:50:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.163 06:50:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.163 06:50:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:34.163 06:50:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:34.163 06:50:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:34.163 06:50:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:34.163 06:50:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:34.163 06:50:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:34.163 06:50:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.163 06:50:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.163 06:50:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.163 06:50:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:34.163 06:50:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:34.163 06:50:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.163 06:50:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.163 06:50:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.163 06:50:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.163 06:50:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.163 06:50:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:34.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:34.163 00:11:34.163 --- 10.0.0.2 ping statistics --- 00:11:34.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.163 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:34.163 06:50:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:34.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:34.163 00:11:34.163 --- 10.0.0.3 ping statistics --- 00:11:34.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.163 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:34.163 06:50:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:34.163 00:11:34.163 --- 10.0.0.1 ping statistics --- 00:11:34.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.163 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:34.163 06:50:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.163 06:50:38 -- nvmf/common.sh@421 -- # return 0 00:11:34.163 06:50:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:34.163 06:50:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.163 06:50:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:34.163 06:50:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:34.163 06:50:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.163 06:50:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:34.163 06:50:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:34.163 06:50:38 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:34.163 06:50:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:34.163 06:50:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.163 06:50:38 -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 06:50:38 -- nvmf/common.sh@469 -- # nvmfpid=76487 00:11:34.163 06:50:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:34.163 06:50:38 -- nvmf/common.sh@470 -- # waitforlisten 76487 00:11:34.163 06:50:38 -- common/autotest_common.sh@829 -- # '[' -z 76487 ']' 00:11:34.163 06:50:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.163 06:50:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.163 06:50:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.163 06:50:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.163 06:50:38 -- common/autotest_common.sh@10 -- # set +x 00:11:34.422 [2024-12-13 06:50:38.726508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:34.422 [2024-12-13 06:50:38.726598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.422 [2024-12-13 06:50:38.872382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.422 [2024-12-13 06:50:38.912661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.422 [2024-12-13 06:50:38.912818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.422 [2024-12-13 06:50:38.912833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.422 [2024-12-13 06:50:38.912843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
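Annotation: the nvmf_veth_init sequence traced above (nvmf/common.sh@140-206) is easier to follow as a standalone script. This is a minimal sketch assembled only from the commands visible in the trace; the real helper also tears down stale devices first (the "Cannot find device" lines) and configures a second target interface, nvmf_tgt_if2 at 10.0.0.3, the same way.

    # Target side lives in a network namespace; the initiator side stays in
    # the root namespace. A bridge ties the two veth peers together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target across the bridge, as in the trace

Once this is up, nvmf_tgt itself runs under "ip netns exec nvmf_tgt_ns_spdk" (the NVMF_TARGET_NS_CMD prefix visible at nvmf/common.sh@468), while rpc.py keeps talking to it over the UNIX domain socket from the root namespace.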
00:11:34.422 [2024-12-13 06:50:38.912875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.357 06:50:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.357 06:50:39 -- common/autotest_common.sh@862 -- # return 0 00:11:35.357 06:50:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:35.357 06:50:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.357 06:50:39 -- common/autotest_common.sh@10 -- # set +x 00:11:35.357 06:50:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.357 06:50:39 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:35.357 06:50:39 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:35.616 true 00:11:35.616 06:50:39 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:35.616 06:50:39 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:35.875 06:50:40 -- target/tls.sh@82 -- # version=0 00:11:35.875 06:50:40 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:35.875 06:50:40 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:36.134 06:50:40 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:36.134 06:50:40 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:36.393 06:50:40 -- target/tls.sh@90 -- # version=13 00:11:36.393 06:50:40 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:36.393 06:50:40 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:36.651 06:50:41 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:36.651 06:50:41 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:36.910 06:50:41 -- target/tls.sh@98 -- # version=7 00:11:36.910 06:50:41 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:36.910 06:50:41 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:36.910 06:50:41 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:37.170 06:50:41 -- target/tls.sh@105 -- # ktls=false 00:11:37.170 06:50:41 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:37.170 06:50:41 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:37.429 06:50:41 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:37.429 06:50:41 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:37.688 06:50:41 -- target/tls.sh@113 -- # ktls=true 00:11:37.688 06:50:41 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:37.688 06:50:41 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:37.952 06:50:42 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:37.952 06:50:42 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:38.217 06:50:42 -- target/tls.sh@121 -- # ktls=false 00:11:38.217 06:50:42 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:38.217 06:50:42 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:38.217 06:50:42 -- target/tls.sh@49 -- # local key hash crc 00:11:38.217 06:50:42 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:38.217 06:50:42 -- target/tls.sh@51 -- # hash=01 00:11:38.217 06:50:42 -- 
target/tls.sh@52 -- # gzip -1 -c 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # tail -c8 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # head -c 4 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # crc='p$H�' 00:11:38.217 06:50:42 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:38.217 06:50:42 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:38.217 06:50:42 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.217 06:50:42 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.217 06:50:42 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:38.217 06:50:42 -- target/tls.sh@49 -- # local key hash crc 00:11:38.217 06:50:42 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:38.217 06:50:42 -- target/tls.sh@51 -- # hash=01 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:38.217 06:50:42 -- target/tls.sh@52 -- # gzip -1 -c 00:11:38.218 06:50:42 -- target/tls.sh@52 -- # tail -c8 00:11:38.218 06:50:42 -- target/tls.sh@52 -- # head -c 4 00:11:38.218 06:50:42 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:38.218 06:50:42 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:38.218 06:50:42 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:38.218 06:50:42 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.218 06:50:42 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.218 06:50:42 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.218 06:50:42 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.218 06:50:42 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:38.218 06:50:42 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:38.218 06:50:42 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.218 06:50:42 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.218 06:50:42 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:38.476 06:50:42 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:38.735 06:50:43 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.735 06:50:43 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:38.735 06:50:43 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:38.994 [2024-12-13 06:50:43.375041] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.994 06:50:43 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:39.252 06:50:43 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:39.511 [2024-12-13 06:50:43.799126] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:11:39.511 [2024-12-13 06:50:43.799777] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.511 06:50:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:39.770 malloc0 00:11:39.770 06:50:44 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:39.770 06:50:44 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.029 06:50:44 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:52.231 Initializing NVMe Controllers 00:11:52.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:52.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:52.231 Initialization complete. Launching workers. 00:11:52.231 ======================================================== 00:11:52.231 Latency(us) 00:11:52.231 Device Information : IOPS MiB/s Average min max 00:11:52.231 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10721.55 41.88 5970.37 1426.75 10177.57 00:11:52.231 ======================================================== 00:11:52.231 Total : 10721.55 41.88 5970.37 1426.75 10177.57 00:11:52.231 00:11:52.231 06:50:54 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:52.231 06:50:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:52.231 06:50:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:52.231 06:50:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:52.231 06:50:54 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:52.231 06:50:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:52.231 06:50:54 -- target/tls.sh@28 -- # bdevperf_pid=76731 00:11:52.231 06:50:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:52.231 06:50:54 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:52.231 06:50:54 -- target/tls.sh@31 -- # waitforlisten 76731 /var/tmp/bdevperf.sock 00:11:52.231 06:50:54 -- common/autotest_common.sh@829 -- # '[' -z 76731 ']' 00:11:52.231 06:50:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.231 06:50:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.231 06:50:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
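Annotation: the format_interchange_psk trace above (target/tls.sh@49-54) packs a configured hex key into the NVMe/TCP TLS PSK interchange format: a CRC32 of the key bytes is pulled out of a gzip trailer, appended to the key, and the result base64-encoded. A minimal sketch of the same pipeline, assuming GNU coreutils; note that $(...) command substitution is not binary-safe (NULs are dropped and trailing newlines stripped), which is why tls.sh routes the raw bytes through /dev/fd instead.

    key=00112233445566778899aabbccddeeff
    # gzip's 8-byte trailer is CRC32 then ISIZE; keep the first 4 bytes.
    crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)
    printf 'NVMeTLSkey-1:01:%s:\n' "$(printf '%s%s' "$key" "$crc" | base64)"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The 01 field is the hash identifier of the interchange format (the 48-hex-digit key later in this log uses 02); the resulting string is what the chmod-0600 key1.txt/key2.txt files hold, and it is consumed unchanged by both nvmf_subsystem_add_host --psk on the target and bdev_nvme_attach_controller --psk on the initiator.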
00:11:52.231 06:50:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.231 06:50:54 -- common/autotest_common.sh@10 -- # set +x 00:11:52.231 [2024-12-13 06:50:54.750702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:52.231 [2024-12-13 06:50:54.750997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76731 ] 00:11:52.231 [2024-12-13 06:50:54.895206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.231 [2024-12-13 06:50:54.934473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.231 06:50:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.231 06:50:55 -- common/autotest_common.sh@862 -- # return 0 00:11:52.231 06:50:55 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:52.231 [2024-12-13 06:50:55.952483] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:52.231 TLSTESTn1 00:11:52.231 06:50:56 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:52.231 Running I/O for 10 seconds... 00:12:02.224 00:12:02.224 Latency(us) 00:12:02.224 [2024-12-13T06:51:06.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.224 [2024-12-13T06:51:06.743Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:02.224 Verification LBA range: start 0x0 length 0x2000 00:12:02.224 TLSTESTn1 : 10.01 5890.27 23.01 0.00 0.00 21695.41 4081.11 24307.90 00:12:02.224 [2024-12-13T06:51:06.743Z] =================================================================================================================== 00:12:02.224 [2024-12-13T06:51:06.743Z] Total : 5890.27 23.01 0.00 0.00 21695.41 4081.11 24307.90 00:12:02.224 0 00:12:02.224 06:51:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:02.224 06:51:06 -- target/tls.sh@45 -- # killprocess 76731 00:12:02.224 06:51:06 -- common/autotest_common.sh@936 -- # '[' -z 76731 ']' 00:12:02.224 06:51:06 -- common/autotest_common.sh@940 -- # kill -0 76731 00:12:02.224 06:51:06 -- common/autotest_common.sh@941 -- # uname 00:12:02.224 06:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.224 06:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76731 00:12:02.224 killing process with pid 76731 00:12:02.224 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.224 00:12:02.224 Latency(us) 00:12:02.224 [2024-12-13T06:51:06.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.224 [2024-12-13T06:51:06.743Z] =================================================================================================================== 00:12:02.224 [2024-12-13T06:51:06.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.224 06:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:02.224 06:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:02.224 06:51:06 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 76731' 00:12:02.224 06:51:06 -- common/autotest_common.sh@955 -- # kill 76731 00:12:02.224 06:51:06 -- common/autotest_common.sh@960 -- # wait 76731 00:12:02.224 06:51:06 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:02.224 06:51:06 -- common/autotest_common.sh@650 -- # local es=0 00:12:02.224 06:51:06 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:02.224 06:51:06 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:02.224 06:51:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.224 06:51:06 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:02.224 06:51:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.224 06:51:06 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:02.224 06:51:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:02.224 06:51:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:02.224 06:51:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:02.224 06:51:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:02.224 06:51:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:02.224 06:51:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:02.224 06:51:06 -- target/tls.sh@28 -- # bdevperf_pid=76866 00:12:02.224 06:51:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:02.224 06:51:06 -- target/tls.sh@31 -- # waitforlisten 76866 /var/tmp/bdevperf.sock 00:12:02.224 06:51:06 -- common/autotest_common.sh@829 -- # '[' -z 76866 ']' 00:12:02.224 06:51:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:02.224 06:51:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.224 06:51:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:02.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:02.224 06:51:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.224 06:51:06 -- common/autotest_common.sh@10 -- # set +x 00:12:02.224 [2024-12-13 06:51:06.409497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:02.224 [2024-12-13 06:51:06.409733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76866 ] 00:12:02.224 [2024-12-13 06:51:06.543317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.224 [2024-12-13 06:51:06.578336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.161 06:51:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.161 06:51:07 -- common/autotest_common.sh@862 -- # return 0 00:12:03.161 06:51:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:03.420 [2024-12-13 06:51:07.692899] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:03.420 [2024-12-13 06:51:07.701464] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:03.420 [2024-12-13 06:51:07.701948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2091f90 (107): Transport endpoint is not connected 00:12:03.420 [2024-12-13 06:51:07.702938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2091f90 (9): Bad file descriptor 00:12:03.420 [2024-12-13 06:51:07.703935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:03.420 [2024-12-13 06:51:07.704093] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:03.420 [2024-12-13 06:51:07.704109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:03.420 request: 00:12:03.420 { 00:12:03.420 "name": "TLSTEST", 00:12:03.420 "trtype": "tcp", 00:12:03.420 "traddr": "10.0.0.2", 00:12:03.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.420 "adrfam": "ipv4", 00:12:03.420 "trsvcid": "4420", 00:12:03.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.420 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:03.420 "method": "bdev_nvme_attach_controller", 00:12:03.420 "req_id": 1 00:12:03.420 } 00:12:03.420 Got JSON-RPC error response 00:12:03.420 response: 00:12:03.420 { 00:12:03.420 "code": -32602, 00:12:03.420 "message": "Invalid parameters" 00:12:03.420 } 00:12:03.420 06:51:07 -- target/tls.sh@36 -- # killprocess 76866 00:12:03.420 06:51:07 -- common/autotest_common.sh@936 -- # '[' -z 76866 ']' 00:12:03.420 06:51:07 -- common/autotest_common.sh@940 -- # kill -0 76866 00:12:03.420 06:51:07 -- common/autotest_common.sh@941 -- # uname 00:12:03.420 06:51:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.420 06:51:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76866 00:12:03.420 killing process with pid 76866 00:12:03.420 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.420 00:12:03.420 Latency(us) 00:12:03.420 [2024-12-13T06:51:07.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.420 [2024-12-13T06:51:07.939Z] =================================================================================================================== 00:12:03.420 [2024-12-13T06:51:07.939Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:03.420 06:51:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:03.420 06:51:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:03.420 06:51:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76866' 00:12:03.420 06:51:07 -- common/autotest_common.sh@955 -- # kill 76866 00:12:03.420 06:51:07 -- common/autotest_common.sh@960 -- # wait 76866 00:12:03.420 06:51:07 -- target/tls.sh@37 -- # return 1 00:12:03.420 06:51:07 -- common/autotest_common.sh@653 -- # es=1 00:12:03.420 06:51:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.420 06:51:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:03.420 06:51:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.420 06:51:07 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:03.420 06:51:07 -- common/autotest_common.sh@650 -- # local es=0 00:12:03.420 06:51:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:03.420 06:51:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:03.420 06:51:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.420 06:51:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:03.420 06:51:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.420 06:51:07 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:03.420 06:51:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:03.420 06:51:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:03.420 06:51:07 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:12:03.420 06:51:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:03.420 06:51:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:03.420 06:51:07 -- target/tls.sh@28 -- # bdevperf_pid=76894 00:12:03.420 06:51:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:03.421 06:51:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:03.421 06:51:07 -- target/tls.sh@31 -- # waitforlisten 76894 /var/tmp/bdevperf.sock 00:12:03.421 06:51:07 -- common/autotest_common.sh@829 -- # '[' -z 76894 ']' 00:12:03.421 06:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.421 06:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.421 06:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.421 06:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.421 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:12:03.680 [2024-12-13 06:51:07.949769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:03.680 [2024-12-13 06:51:07.950084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76894 ] 00:12:03.680 [2024-12-13 06:51:08.084638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.680 [2024-12-13 06:51:08.122312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.617 06:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.617 06:51:08 -- common/autotest_common.sh@862 -- # return 0 00:12:04.617 06:51:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.876 [2024-12-13 06:51:09.221351] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.876 [2024-12-13 06:51:09.231549] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:04.876 [2024-12-13 06:51:09.231761] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:04.876 [2024-12-13 06:51:09.231997] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:04.876 [2024-12-13 06:51:09.232908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693f90 (107): Transport endpoint is not connected 00:12:04.876 [2024-12-13 06:51:09.233899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693f90 (9): Bad file descriptor 00:12:04.876 [2024-12-13 06:51:09.234897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:04.876 [2024-12-13 06:51:09.234919] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:04.876 [2024-12-13 06:51:09.234944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:04.876 request: 00:12:04.876 { 00:12:04.876 "name": "TLSTEST", 00:12:04.876 "trtype": "tcp", 00:12:04.876 "traddr": "10.0.0.2", 00:12:04.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:04.876 "adrfam": "ipv4", 00:12:04.876 "trsvcid": "4420", 00:12:04.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.876 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:04.876 "method": "bdev_nvme_attach_controller", 00:12:04.876 "req_id": 1 00:12:04.876 } 00:12:04.876 Got JSON-RPC error response 00:12:04.876 response: 00:12:04.876 { 00:12:04.876 "code": -32602, 00:12:04.876 "message": "Invalid parameters" 00:12:04.876 } 00:12:04.876 06:51:09 -- target/tls.sh@36 -- # killprocess 76894 00:12:04.876 06:51:09 -- common/autotest_common.sh@936 -- # '[' -z 76894 ']' 00:12:04.876 06:51:09 -- common/autotest_common.sh@940 -- # kill -0 76894 00:12:04.876 06:51:09 -- common/autotest_common.sh@941 -- # uname 00:12:04.876 06:51:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.876 06:51:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76894 00:12:04.876 killing process with pid 76894 00:12:04.876 Received shutdown signal, test time was about 10.000000 seconds 00:12:04.876 00:12:04.877 Latency(us) 00:12:04.877 [2024-12-13T06:51:09.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.877 [2024-12-13T06:51:09.396Z] =================================================================================================================== 00:12:04.877 [2024-12-13T06:51:09.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.877 06:51:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:04.877 06:51:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:04.877 06:51:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76894' 00:12:04.877 06:51:09 -- common/autotest_common.sh@955 -- # kill 76894 00:12:04.877 06:51:09 -- common/autotest_common.sh@960 -- # wait 76894 00:12:05.136 06:51:09 -- target/tls.sh@37 -- # return 1 00:12:05.136 06:51:09 -- common/autotest_common.sh@653 -- # es=1 00:12:05.136 06:51:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:05.136 06:51:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:05.136 06:51:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:05.136 06:51:09 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:05.136 06:51:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:05.136 06:51:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:05.136 06:51:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:05.136 06:51:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:05.136 06:51:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:05.136 06:51:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:05.136 06:51:09 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:05.136 06:51:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:05.136 06:51:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:05.136 06:51:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:05.136 06:51:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:05.136 06:51:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:05.136 06:51:09 -- target/tls.sh@28 -- # bdevperf_pid=76921 00:12:05.136 06:51:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:05.137 06:51:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:05.137 06:51:09 -- target/tls.sh@31 -- # waitforlisten 76921 /var/tmp/bdevperf.sock 00:12:05.137 06:51:09 -- common/autotest_common.sh@829 -- # '[' -z 76921 ']' 00:12:05.137 06:51:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:05.137 06:51:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:05.137 06:51:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:05.137 06:51:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.137 06:51:09 -- common/autotest_common.sh@10 -- # set +x 00:12:05.137 [2024-12-13 06:51:09.502696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:05.137 [2024-12-13 06:51:09.503528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76921 ] 00:12:05.137 [2024-12-13 06:51:09.647736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.396 [2024-12-13 06:51:09.683077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.963 06:51:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.963 06:51:10 -- common/autotest_common.sh@862 -- # return 0 00:12:05.963 06:51:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:06.222 [2024-12-13 06:51:10.681124] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:06.222 [2024-12-13 06:51:10.691757] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:06.222 [2024-12-13 06:51:10.691992] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:06.223 [2024-12-13 06:51:10.692054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:06.223 [2024-12-13 06:51:10.692662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ef90 
(107): Transport endpoint is not connected 00:12:06.223 [2024-12-13 06:51:10.693651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ef90 (9): Bad file descriptor 00:12:06.223 [2024-12-13 06:51:10.694649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:06.223 [2024-12-13 06:51:10.694701] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:06.223 [2024-12-13 06:51:10.694726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:06.223 request: 00:12:06.223 { 00:12:06.223 "name": "TLSTEST", 00:12:06.223 "trtype": "tcp", 00:12:06.223 "traddr": "10.0.0.2", 00:12:06.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.223 "adrfam": "ipv4", 00:12:06.223 "trsvcid": "4420", 00:12:06.223 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:06.223 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:06.223 "method": "bdev_nvme_attach_controller", 00:12:06.223 "req_id": 1 00:12:06.223 } 00:12:06.223 Got JSON-RPC error response 00:12:06.223 response: 00:12:06.223 { 00:12:06.223 "code": -32602, 00:12:06.223 "message": "Invalid parameters" 00:12:06.223 } 00:12:06.223 06:51:10 -- target/tls.sh@36 -- # killprocess 76921 00:12:06.223 06:51:10 -- common/autotest_common.sh@936 -- # '[' -z 76921 ']' 00:12:06.223 06:51:10 -- common/autotest_common.sh@940 -- # kill -0 76921 00:12:06.223 06:51:10 -- common/autotest_common.sh@941 -- # uname 00:12:06.223 06:51:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:06.223 06:51:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76921 00:12:06.482 06:51:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:06.482 06:51:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:06.482 06:51:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76921' 00:12:06.482 killing process with pid 76921 00:12:06.482 06:51:10 -- common/autotest_common.sh@955 -- # kill 76921 00:12:06.482 Received shutdown signal, test time was about 10.000000 seconds 00:12:06.482 00:12:06.482 Latency(us) 00:12:06.482 [2024-12-13T06:51:11.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.482 [2024-12-13T06:51:11.001Z] =================================================================================================================== 00:12:06.482 [2024-12-13T06:51:11.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:06.482 06:51:10 -- common/autotest_common.sh@960 -- # wait 76921 00:12:06.482 06:51:10 -- target/tls.sh@37 -- # return 1 00:12:06.482 06:51:10 -- common/autotest_common.sh@653 -- # es=1 00:12:06.482 06:51:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:06.482 06:51:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:06.482 06:51:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:06.482 06:51:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:06.482 06:51:10 -- common/autotest_common.sh@650 -- # local es=0 00:12:06.482 06:51:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:06.482 06:51:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:06.482 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.482 06:51:10 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:12:06.482 06:51:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.482 06:51:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:06.482 06:51:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:06.482 06:51:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:06.482 06:51:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:06.482 06:51:10 -- target/tls.sh@23 -- # psk= 00:12:06.482 06:51:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:06.482 06:51:10 -- target/tls.sh@28 -- # bdevperf_pid=76950 00:12:06.482 06:51:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:06.482 06:51:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:06.482 06:51:10 -- target/tls.sh@31 -- # waitforlisten 76950 /var/tmp/bdevperf.sock 00:12:06.482 06:51:10 -- common/autotest_common.sh@829 -- # '[' -z 76950 ']' 00:12:06.482 06:51:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:06.482 06:51:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.482 06:51:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:06.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:06.482 06:51:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.482 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:12:06.482 [2024-12-13 06:51:10.935689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:06.482 [2024-12-13 06:51:10.935993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76950 ] 00:12:06.741 [2024-12-13 06:51:11.076087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.741 [2024-12-13 06:51:11.108761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.741 06:51:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.741 06:51:11 -- common/autotest_common.sh@862 -- # return 0 00:12:06.741 06:51:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:07.008 [2024-12-13 06:51:11.402056] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:07.008 [2024-12-13 06:51:11.404176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1c20 (9): Bad file descriptor 00:12:07.008 [2024-12-13 06:51:11.405171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:07.008 [2024-12-13 06:51:11.405346] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:07.008 [2024-12-13 06:51:11.405407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:07.008 request: 00:12:07.008 { 00:12:07.008 "name": "TLSTEST", 00:12:07.008 "trtype": "tcp", 00:12:07.008 "traddr": "10.0.0.2", 00:12:07.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:07.008 "adrfam": "ipv4", 00:12:07.008 "trsvcid": "4420", 00:12:07.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.008 "method": "bdev_nvme_attach_controller", 00:12:07.008 "req_id": 1 00:12:07.008 } 00:12:07.008 Got JSON-RPC error response 00:12:07.008 response: 00:12:07.008 { 00:12:07.008 "code": -32602, 00:12:07.008 "message": "Invalid parameters" 00:12:07.008 }
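Annotation: the bdevperf runs so far exercise one success path and three failure paths of the same attach call, and the failures differ only in which lookup rejects the PSK: a key the target never registered (key2.txt), a valid key presented under the wrong hostnqn or subnqn (the target logs "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"), and no --psk at all against a listener created with -k, where the TLS handshake never completes and reads fail with errno 107. A sketch of the mismatched-identity variant, copied from the trace; only the -q value differs from the successful run.

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host2 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # Expected: JSON-RPC error -32602, since the target's PSK table is keyed
    # on the (hostnqn, subnqn) pair and only host1/cnode1 was registered.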
00:12:07.008 06:51:11 -- target/tls.sh@36 -- # killprocess 76950 00:12:07.008 06:51:11 -- common/autotest_common.sh@936 -- # '[' -z 76950 ']' 00:12:07.008 06:51:11 -- common/autotest_common.sh@940 -- # kill -0 76950 00:12:07.008 06:51:11 -- common/autotest_common.sh@941 -- # uname 00:12:07.008 06:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.008 06:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76950 00:12:07.008 06:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:07.009 06:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:07.009 06:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76950' 00:12:07.009 killing process with pid 76950 00:12:07.009 06:51:11 -- common/autotest_common.sh@955 -- # kill 76950 00:12:07.009 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.009 00:12:07.009 Latency(us) 00:12:07.009 [2024-12-13T06:51:11.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.009 [2024-12-13T06:51:11.528Z] =================================================================================================================== 00:12:07.009 [2024-12-13T06:51:11.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:07.009 06:51:11 -- common/autotest_common.sh@960 -- # wait 76950 00:12:07.268 06:51:11 -- target/tls.sh@37 -- # return 1 00:12:07.268 06:51:11 -- common/autotest_common.sh@653 -- # es=1 00:12:07.268 06:51:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.268 06:51:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.268 06:51:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:07.268 06:51:11 -- target/tls.sh@167 -- # killprocess 76487 00:12:07.268 06:51:11 -- common/autotest_common.sh@936 -- # '[' -z 76487 ']' 00:12:07.268 06:51:11 -- common/autotest_common.sh@940 -- # kill -0 76487 00:12:07.268 06:51:11 -- common/autotest_common.sh@941 -- # uname 00:12:07.268 06:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.268 06:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76487 00:12:07.268 06:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:07.268 06:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:07.268 killing process with pid 76487 00:12:07.268 06:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76487' 00:12:07.268 06:51:11 -- common/autotest_common.sh@955 -- # kill 76487 00:12:07.268 06:51:11 -- common/autotest_common.sh@960 -- # wait 76487 00:12:07.268 06:51:11 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:07.268 06:51:11 -- target/tls.sh@49 -- # local key hash crc 00:12:07.268 06:51:11 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:07.268 06:51:11 -- target/tls.sh@51 -- # hash=02 00:12:07.268 06:51:11 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:12:07.268 06:51:11 -- target/tls.sh@52 -- # gzip -1 -c 00:12:07.268 06:51:11 -- target/tls.sh@52 -- # head -c 4 00:12:07.268 06:51:11 -- target/tls.sh@52 -- # tail -c8 00:12:07.268 06:51:11 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:07.268 06:51:11 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:07.268 06:51:11 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:07.268 06:51:11 -- target/tls.sh@54 -- # echo 
NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:07.268 06:51:11 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:07.527 06:51:11 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.527 06:51:11 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:07.527 06:51:11 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.527 06:51:11 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:07.527 06:51:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:07.527 06:51:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:07.527 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 06:51:11 -- nvmf/common.sh@469 -- # nvmfpid=76985 00:12:07.527 06:51:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:07.527 06:51:11 -- nvmf/common.sh@470 -- # waitforlisten 76985 00:12:07.527 06:51:11 -- common/autotest_common.sh@829 -- # '[' -z 76985 ']' 00:12:07.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.527 06:51:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.527 06:51:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.527 06:51:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.527 06:51:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.527 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 [2024-12-13 06:51:11.852636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:07.527 [2024-12-13 06:51:11.852911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.527 [2024-12-13 06:51:11.993678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.527 [2024-12-13 06:51:12.025464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.527 [2024-12-13 06:51:12.025611] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.527 [2024-12-13 06:51:12.025624] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.527 [2024-12-13 06:51:12.025632] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
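Annotation: setup_nvmf_tgt (target/tls.sh@58-67), which runs next against key_long.txt exactly as it ran earlier against key1.txt, reduces to a short RPC sequence. A sketch using the paths from the trace, with $rpc standing in for scripts/rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The add_listener -k flag is what triggers the "TLS support is considered experimental" notice, and add_host --psk is the registration that the earlier negative tests were probing.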
00:12:07.527 [2024-12-13 06:51:12.025654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.787 06:51:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.787 06:51:12 -- common/autotest_common.sh@862 -- # return 0 00:12:07.787 06:51:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:07.787 06:51:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:07.787 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:12:07.787 06:51:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.787 06:51:12 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.787 06:51:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.787 06:51:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:08.046 [2024-12-13 06:51:12.414647] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.046 06:51:12 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:08.305 06:51:12 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:08.563 [2024-12-13 06:51:12.942818] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:08.563 [2024-12-13 06:51:12.943034] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.563 06:51:12 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:08.822 malloc0 00:12:08.822 06:51:13 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:09.081 06:51:13 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:09.340 06:51:13 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:09.340 06:51:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:09.340 06:51:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:09.340 06:51:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:09.340 06:51:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:09.340 06:51:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:09.340 06:51:13 -- target/tls.sh@28 -- # bdevperf_pid=77027 00:12:09.340 06:51:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:09.340 06:51:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:09.340 06:51:13 -- target/tls.sh@31 -- # waitforlisten 77027 /var/tmp/bdevperf.sock 00:12:09.340 06:51:13 -- common/autotest_common.sh@829 -- # '[' -z 77027 ']' 00:12:09.340 06:51:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:09.340 06:51:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.340 06:51:13 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:09.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:09.340 06:51:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.340 06:51:13 -- common/autotest_common.sh@10 -- # set +x 00:12:09.340 [2024-12-13 06:51:13.687396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:09.340 [2024-12-13 06:51:13.687686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77027 ] 00:12:09.340 [2024-12-13 06:51:13.829689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.599 [2024-12-13 06:51:13.863593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.599 06:51:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.599 06:51:13 -- common/autotest_common.sh@862 -- # return 0 00:12:09.599 06:51:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:09.857 [2024-12-13 06:51:14.204167] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:09.857 TLSTESTn1 00:12:09.857 06:51:14 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:10.115 Running I/O for 10 seconds... 00:12:20.084 00:12:20.084 Latency(us) 00:12:20.084 [2024-12-13T06:51:24.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.084 [2024-12-13T06:51:24.603Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:20.084 Verification LBA range: start 0x0 length 0x2000 00:12:20.084 TLSTESTn1 : 10.02 5618.30 21.95 0.00 0.00 22744.62 5451.40 21448.15 00:12:20.084 [2024-12-13T06:51:24.603Z] =================================================================================================================== 00:12:20.084 [2024-12-13T06:51:24.603Z] Total : 5618.30 21.95 0.00 0.00 22744.62 5451.40 21448.15 00:12:20.084 0 00:12:20.084 06:51:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:20.084 06:51:24 -- target/tls.sh@45 -- # killprocess 77027 00:12:20.084 06:51:24 -- common/autotest_common.sh@936 -- # '[' -z 77027 ']' 00:12:20.084 06:51:24 -- common/autotest_common.sh@940 -- # kill -0 77027 00:12:20.084 06:51:24 -- common/autotest_common.sh@941 -- # uname 00:12:20.084 06:51:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.084 06:51:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77027 00:12:20.084 killing process with pid 77027 00:12:20.084 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.084 00:12:20.084 Latency(us) 00:12:20.084 [2024-12-13T06:51:24.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.084 [2024-12-13T06:51:24.603Z] =================================================================================================================== 00:12:20.084 [2024-12-13T06:51:24.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.084 06:51:24 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:20.084 06:51:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:20.084 06:51:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77027' 00:12:20.084 06:51:24 -- common/autotest_common.sh@955 -- # kill 77027 00:12:20.084 06:51:24 -- common/autotest_common.sh@960 -- # wait 77027 00:12:20.343 06:51:24 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.343 06:51:24 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.343 06:51:24 -- common/autotest_common.sh@650 -- # local es=0 00:12:20.343 06:51:24 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.343 06:51:24 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:20.343 06:51:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.343 06:51:24 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:20.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.343 06:51:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.343 06:51:24 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:20.343 06:51:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:20.343 06:51:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:20.343 06:51:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:20.343 06:51:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:20.343 06:51:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:20.343 06:51:24 -- target/tls.sh@28 -- # bdevperf_pid=77150 00:12:20.343 06:51:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:20.343 06:51:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:20.343 06:51:24 -- target/tls.sh@31 -- # waitforlisten 77150 /var/tmp/bdevperf.sock 00:12:20.343 06:51:24 -- common/autotest_common.sh@829 -- # '[' -z 77150 ']' 00:12:20.343 06:51:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.343 06:51:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.343 06:51:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.343 06:51:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.343 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:12:20.343 [2024-12-13 06:51:24.671935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:20.343 [2024-12-13 06:51:24.672270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77150 ] 00:12:20.343 [2024-12-13 06:51:24.814845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.343 [2024-12-13 06:51:24.849171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.306 06:51:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.306 06:51:25 -- common/autotest_common.sh@862 -- # return 0 00:12:21.306 06:51:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:21.565 [2024-12-13 06:51:25.845977] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:21.565 [2024-12-13 06:51:25.846242] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:21.565 request: 00:12:21.565 { 00:12:21.565 "name": "TLSTEST", 00:12:21.565 "trtype": "tcp", 00:12:21.565 "traddr": "10.0.0.2", 00:12:21.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:21.565 "adrfam": "ipv4", 00:12:21.565 "trsvcid": "4420", 00:12:21.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:21.565 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:21.565 "method": "bdev_nvme_attach_controller", 00:12:21.565 "req_id": 1 00:12:21.565 } 00:12:21.565 Got JSON-RPC error response 00:12:21.565 response: 00:12:21.565 { 00:12:21.565 "code": -22, 00:12:21.565 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:21.565 } 00:12:21.565 06:51:25 -- target/tls.sh@36 -- # killprocess 77150 00:12:21.565 06:51:25 -- common/autotest_common.sh@936 -- # '[' -z 77150 ']' 00:12:21.565 06:51:25 -- common/autotest_common.sh@940 -- # kill -0 77150 00:12:21.565 06:51:25 -- common/autotest_common.sh@941 -- # uname 00:12:21.565 06:51:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.565 06:51:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77150 00:12:21.565 06:51:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:21.565 killing process with pid 77150 00:12:21.565 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.565 00:12:21.565 Latency(us) 00:12:21.565 [2024-12-13T06:51:26.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.565 [2024-12-13T06:51:26.084Z] =================================================================================================================== 00:12:21.565 [2024-12-13T06:51:26.084Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:21.565 06:51:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:21.565 06:51:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77150' 00:12:21.565 06:51:25 -- common/autotest_common.sh@955 -- # kill 77150 00:12:21.565 06:51:25 -- common/autotest_common.sh@960 -- # wait 77150 00:12:21.565 06:51:26 -- target/tls.sh@37 -- # return 1 00:12:21.565 06:51:26 -- common/autotest_common.sh@653 -- # es=1 00:12:21.565 06:51:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.565 06:51:26 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.565 06:51:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.565 06:51:26 -- target/tls.sh@183 -- # killprocess 76985 00:12:21.565 06:51:26 -- common/autotest_common.sh@936 -- # '[' -z 76985 ']' 00:12:21.565 06:51:26 -- common/autotest_common.sh@940 -- # kill -0 76985 00:12:21.565 06:51:26 -- common/autotest_common.sh@941 -- # uname 00:12:21.565 06:51:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.565 06:51:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76985 00:12:21.565 killing process with pid 76985 00:12:21.565 06:51:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:21.565 06:51:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:21.565 06:51:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76985' 00:12:21.565 06:51:26 -- common/autotest_common.sh@955 -- # kill 76985 00:12:21.565 06:51:26 -- common/autotest_common.sh@960 -- # wait 76985 00:12:21.824 06:51:26 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:21.824 06:51:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.824 06:51:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.824 06:51:26 -- common/autotest_common.sh@10 -- # set +x 00:12:21.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.824 06:51:26 -- nvmf/common.sh@469 -- # nvmfpid=77188 00:12:21.824 06:51:26 -- nvmf/common.sh@470 -- # waitforlisten 77188 00:12:21.824 06:51:26 -- common/autotest_common.sh@829 -- # '[' -z 77188 ']' 00:12:21.824 06:51:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.824 06:51:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.824 06:51:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.824 06:51:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.824 06:51:26 -- common/autotest_common.sh@10 -- # set +x 00:12:21.824 06:51:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:21.824 [2024-12-13 06:51:26.260714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:21.824 [2024-12-13 06:51:26.261613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.083 [2024-12-13 06:51:26.399920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.083 [2024-12-13 06:51:26.430133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.083 [2024-12-13 06:51:26.430277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.083 [2024-12-13 06:51:26.430290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.083 [2024-12-13 06:51:26.430297] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:22.083 [2024-12-13 06:51:26.430320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.019 06:51:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.019 06:51:27 -- common/autotest_common.sh@862 -- # return 0 00:12:23.019 06:51:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:23.019 06:51:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.019 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:12:23.019 06:51:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.019 06:51:27 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.019 06:51:27 -- common/autotest_common.sh@650 -- # local es=0 00:12:23.019 06:51:27 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.019 06:51:27 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:12:23.019 06:51:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.019 06:51:27 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:12:23.019 06:51:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.019 06:51:27 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.019 06:51:27 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:23.019 06:51:27 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:23.019 [2024-12-13 06:51:27.499545] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.019 06:51:27 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:23.277 06:51:27 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:23.536 [2024-12-13 06:51:28.007660] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:23.536 [2024-12-13 06:51:28.007886] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.536 06:51:28 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:23.794 malloc0 00:12:23.794 06:51:28 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:24.360 06:51:28 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.360 [2024-12-13 06:51:28.805854] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:24.360 [2024-12-13 06:51:28.805908] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:24.360 [2024-12-13 06:51:28.805930] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:24.360 request: 00:12:24.360 { 00:12:24.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.360 "host": "nqn.2016-06.io.spdk:host1", 00:12:24.360 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:24.360 "method": "nvmf_subsystem_add_host", 00:12:24.360 
"req_id": 1 00:12:24.360 } 00:12:24.360 Got JSON-RPC error response 00:12:24.360 response: 00:12:24.360 { 00:12:24.360 "code": -32603, 00:12:24.360 "message": "Internal error" 00:12:24.360 } 00:12:24.360 06:51:28 -- common/autotest_common.sh@653 -- # es=1 00:12:24.360 06:51:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:24.360 06:51:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:24.360 06:51:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:24.360 06:51:28 -- target/tls.sh@189 -- # killprocess 77188 00:12:24.360 06:51:28 -- common/autotest_common.sh@936 -- # '[' -z 77188 ']' 00:12:24.360 06:51:28 -- common/autotest_common.sh@940 -- # kill -0 77188 00:12:24.360 06:51:28 -- common/autotest_common.sh@941 -- # uname 00:12:24.360 06:51:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:24.360 06:51:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77188 00:12:24.360 killing process with pid 77188 00:12:24.360 06:51:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:24.360 06:51:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:24.360 06:51:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77188' 00:12:24.360 06:51:28 -- common/autotest_common.sh@955 -- # kill 77188 00:12:24.360 06:51:28 -- common/autotest_common.sh@960 -- # wait 77188 00:12:24.618 06:51:29 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.618 06:51:29 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:24.618 06:51:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.618 06:51:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.618 06:51:29 -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 06:51:29 -- nvmf/common.sh@469 -- # nvmfpid=77251 00:12:24.618 06:51:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:24.618 06:51:29 -- nvmf/common.sh@470 -- # waitforlisten 77251 00:12:24.618 06:51:29 -- common/autotest_common.sh@829 -- # '[' -z 77251 ']' 00:12:24.618 06:51:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.618 06:51:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.618 06:51:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.618 06:51:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.618 06:51:29 -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 [2024-12-13 06:51:29.056082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:24.618 [2024-12-13 06:51:29.056169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.877 [2024-12-13 06:51:29.193790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.877 [2024-12-13 06:51:29.226611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.877 [2024-12-13 06:51:29.226754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:24.877 [2024-12-13 06:51:29.226769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.877 [2024-12-13 06:51:29.226777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.877 [2024-12-13 06:51:29.226802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.877 06:51:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.877 06:51:29 -- common/autotest_common.sh@862 -- # return 0 00:12:24.877 06:51:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:24.877 06:51:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.877 06:51:29 -- common/autotest_common.sh@10 -- # set +x 00:12:24.877 06:51:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.877 06:51:29 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.877 06:51:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:24.877 06:51:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:25.135 [2024-12-13 06:51:29.561403] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.135 06:51:29 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:25.393 06:51:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:25.651 [2024-12-13 06:51:30.049502] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:25.651 [2024-12-13 06:51:30.049730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.651 06:51:30 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:25.919 malloc0 00:12:25.919 06:51:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:26.203 06:51:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:26.461 06:51:30 -- target/tls.sh@197 -- # bdevperf_pid=77292 00:12:26.461 06:51:30 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:26.461 06:51:30 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:26.461 06:51:30 -- target/tls.sh@200 -- # waitforlisten 77292 /var/tmp/bdevperf.sock 00:12:26.461 06:51:30 -- common/autotest_common.sh@829 -- # '[' -z 77292 ']' 00:12:26.461 06:51:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.461 06:51:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.461 06:51:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
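[Annotation] For anyone replaying this phase by hand: the target-side bring-up the trace has just completed reduces to a short RPC sequence. The sketch below uses only commands that appear in this run (same repo paths, listener address 10.0.0.2:4420, and PSK file); the one non-obvious prerequisite, demonstrated by the failed attach earlier in the log, is that the PSK file must not be group/world-readable, or tcp_load_psk rejects it with "Incorrect permissions for PSK file".

# Sketch of the TLS-enabled target setup exercised above (paths and
# addresses taken from this run; adjust for another environment).
SPDK=/home/vagrant/spdk_repo/spdk
chmod 0600 $SPDK/test/nvmf/target/key_long.txt    # 0666 was rejected by tcp_load_psk earlier
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk $SPDK/test/nvmf/target/key_long.txt

The -k flag on nvmf_subsystem_add_listener is what marks the listener as TLS-secured, matching the "TLS support is considered experimental" notices in the trace.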
00:12:26.461 06:51:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.461 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:12:26.461 [2024-12-13 06:51:30.830146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:26.461 [2024-12-13 06:51:30.830468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77292 ] 00:12:26.461 [2024-12-13 06:51:30.968387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.719 [2024-12-13 06:51:31.007214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.285 06:51:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.285 06:51:31 -- common/autotest_common.sh@862 -- # return 0 00:12:27.285 06:51:31 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:27.544 [2024-12-13 06:51:32.000448] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:27.802 TLSTESTn1 00:12:27.802 06:51:32 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:28.061 06:51:32 -- target/tls.sh@205 -- # tgtconf='{ 00:12:28.061 "subsystems": [ 00:12:28.061 { 00:12:28.061 "subsystem": "iobuf", 00:12:28.061 "config": [ 00:12:28.061 { 00:12:28.061 "method": "iobuf_set_options", 00:12:28.061 "params": { 00:12:28.061 "small_pool_count": 8192, 00:12:28.061 "large_pool_count": 1024, 00:12:28.061 "small_bufsize": 8192, 00:12:28.061 "large_bufsize": 135168 00:12:28.061 } 00:12:28.061 } 00:12:28.061 ] 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "subsystem": "sock", 00:12:28.061 "config": [ 00:12:28.061 { 00:12:28.061 "method": "sock_impl_set_options", 00:12:28.061 "params": { 00:12:28.061 "impl_name": "uring", 00:12:28.061 "recv_buf_size": 2097152, 00:12:28.061 "send_buf_size": 2097152, 00:12:28.061 "enable_recv_pipe": true, 00:12:28.061 "enable_quickack": false, 00:12:28.061 "enable_placement_id": 0, 00:12:28.061 "enable_zerocopy_send_server": false, 00:12:28.061 "enable_zerocopy_send_client": false, 00:12:28.061 "zerocopy_threshold": 0, 00:12:28.061 "tls_version": 0, 00:12:28.061 "enable_ktls": false 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "sock_impl_set_options", 00:12:28.061 "params": { 00:12:28.061 "impl_name": "posix", 00:12:28.061 "recv_buf_size": 2097152, 00:12:28.061 "send_buf_size": 2097152, 00:12:28.061 "enable_recv_pipe": true, 00:12:28.061 "enable_quickack": false, 00:12:28.061 "enable_placement_id": 0, 00:12:28.061 "enable_zerocopy_send_server": true, 00:12:28.061 "enable_zerocopy_send_client": false, 00:12:28.061 "zerocopy_threshold": 0, 00:12:28.061 "tls_version": 0, 00:12:28.061 "enable_ktls": false 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "sock_impl_set_options", 00:12:28.061 "params": { 00:12:28.061 "impl_name": "ssl", 00:12:28.061 "recv_buf_size": 4096, 00:12:28.061 "send_buf_size": 4096, 00:12:28.061 "enable_recv_pipe": true, 00:12:28.061 "enable_quickack": false, 00:12:28.061 "enable_placement_id": 0, 00:12:28.061 "enable_zerocopy_send_server": true, 00:12:28.061 "enable_zerocopy_send_client": false, 00:12:28.061 
"zerocopy_threshold": 0, 00:12:28.061 "tls_version": 0, 00:12:28.061 "enable_ktls": false 00:12:28.061 } 00:12:28.061 } 00:12:28.061 ] 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "subsystem": "vmd", 00:12:28.061 "config": [] 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "subsystem": "accel", 00:12:28.061 "config": [ 00:12:28.061 { 00:12:28.061 "method": "accel_set_options", 00:12:28.061 "params": { 00:12:28.061 "small_cache_size": 128, 00:12:28.061 "large_cache_size": 16, 00:12:28.061 "task_count": 2048, 00:12:28.061 "sequence_count": 2048, 00:12:28.061 "buf_count": 2048 00:12:28.061 } 00:12:28.061 } 00:12:28.061 ] 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "subsystem": "bdev", 00:12:28.061 "config": [ 00:12:28.061 { 00:12:28.061 "method": "bdev_set_options", 00:12:28.061 "params": { 00:12:28.061 "bdev_io_pool_size": 65535, 00:12:28.061 "bdev_io_cache_size": 256, 00:12:28.061 "bdev_auto_examine": true, 00:12:28.061 "iobuf_small_cache_size": 128, 00:12:28.061 "iobuf_large_cache_size": 16 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "bdev_raid_set_options", 00:12:28.061 "params": { 00:12:28.061 "process_window_size_kb": 1024 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "bdev_iscsi_set_options", 00:12:28.061 "params": { 00:12:28.061 "timeout_sec": 30 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "bdev_nvme_set_options", 00:12:28.061 "params": { 00:12:28.061 "action_on_timeout": "none", 00:12:28.061 "timeout_us": 0, 00:12:28.061 "timeout_admin_us": 0, 00:12:28.061 "keep_alive_timeout_ms": 10000, 00:12:28.061 "transport_retry_count": 4, 00:12:28.061 "arbitration_burst": 0, 00:12:28.061 "low_priority_weight": 0, 00:12:28.061 "medium_priority_weight": 0, 00:12:28.061 "high_priority_weight": 0, 00:12:28.061 "nvme_adminq_poll_period_us": 10000, 00:12:28.061 "nvme_ioq_poll_period_us": 0, 00:12:28.061 "io_queue_requests": 0, 00:12:28.061 "delay_cmd_submit": true, 00:12:28.061 "bdev_retry_count": 3, 00:12:28.061 "transport_ack_timeout": 0, 00:12:28.061 "ctrlr_loss_timeout_sec": 0, 00:12:28.061 "reconnect_delay_sec": 0, 00:12:28.061 "fast_io_fail_timeout_sec": 0, 00:12:28.061 "generate_uuids": false, 00:12:28.061 "transport_tos": 0, 00:12:28.061 "io_path_stat": false, 00:12:28.061 "allow_accel_sequence": false 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "bdev_nvme_set_hotplug", 00:12:28.061 "params": { 00:12:28.061 "period_us": 100000, 00:12:28.061 "enable": false 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "method": "bdev_malloc_create", 00:12:28.061 "params": { 00:12:28.061 "name": "malloc0", 00:12:28.061 "num_blocks": 8192, 00:12:28.061 "block_size": 4096, 00:12:28.061 "physical_block_size": 4096, 00:12:28.061 "uuid": "6d16870b-f945-406e-b8f5-3192ecfe057b", 00:12:28.061 "optimal_io_boundary": 0 00:12:28.061 } 00:12:28.061 }, 00:12:28.061 { 00:12:28.062 "method": "bdev_wait_for_examine" 00:12:28.062 } 00:12:28.062 ] 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "subsystem": "nbd", 00:12:28.062 "config": [] 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "subsystem": "scheduler", 00:12:28.062 "config": [ 00:12:28.062 { 00:12:28.062 "method": "framework_set_scheduler", 00:12:28.062 "params": { 00:12:28.062 "name": "static" 00:12:28.062 } 00:12:28.062 } 00:12:28.062 ] 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "subsystem": "nvmf", 00:12:28.062 "config": [ 00:12:28.062 { 00:12:28.062 "method": "nvmf_set_config", 00:12:28.062 "params": { 00:12:28.062 "discovery_filter": "match_any", 00:12:28.062 
"admin_cmd_passthru": { 00:12:28.062 "identify_ctrlr": false 00:12:28.062 } 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_set_max_subsystems", 00:12:28.062 "params": { 00:12:28.062 "max_subsystems": 1024 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_set_crdt", 00:12:28.062 "params": { 00:12:28.062 "crdt1": 0, 00:12:28.062 "crdt2": 0, 00:12:28.062 "crdt3": 0 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_create_transport", 00:12:28.062 "params": { 00:12:28.062 "trtype": "TCP", 00:12:28.062 "max_queue_depth": 128, 00:12:28.062 "max_io_qpairs_per_ctrlr": 127, 00:12:28.062 "in_capsule_data_size": 4096, 00:12:28.062 "max_io_size": 131072, 00:12:28.062 "io_unit_size": 131072, 00:12:28.062 "max_aq_depth": 128, 00:12:28.062 "num_shared_buffers": 511, 00:12:28.062 "buf_cache_size": 4294967295, 00:12:28.062 "dif_insert_or_strip": false, 00:12:28.062 "zcopy": false, 00:12:28.062 "c2h_success": false, 00:12:28.062 "sock_priority": 0, 00:12:28.062 "abort_timeout_sec": 1 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_create_subsystem", 00:12:28.062 "params": { 00:12:28.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.062 "allow_any_host": false, 00:12:28.062 "serial_number": "SPDK00000000000001", 00:12:28.062 "model_number": "SPDK bdev Controller", 00:12:28.062 "max_namespaces": 10, 00:12:28.062 "min_cntlid": 1, 00:12:28.062 "max_cntlid": 65519, 00:12:28.062 "ana_reporting": false 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_subsystem_add_host", 00:12:28.062 "params": { 00:12:28.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.062 "host": "nqn.2016-06.io.spdk:host1", 00:12:28.062 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_subsystem_add_ns", 00:12:28.062 "params": { 00:12:28.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.062 "namespace": { 00:12:28.062 "nsid": 1, 00:12:28.062 "bdev_name": "malloc0", 00:12:28.062 "nguid": "6D16870BF945406EB8F53192ECFE057B", 00:12:28.062 "uuid": "6d16870b-f945-406e-b8f5-3192ecfe057b" 00:12:28.062 } 00:12:28.062 } 00:12:28.062 }, 00:12:28.062 { 00:12:28.062 "method": "nvmf_subsystem_add_listener", 00:12:28.062 "params": { 00:12:28.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.062 "listen_address": { 00:12:28.062 "trtype": "TCP", 00:12:28.062 "adrfam": "IPv4", 00:12:28.062 "traddr": "10.0.0.2", 00:12:28.062 "trsvcid": "4420" 00:12:28.062 }, 00:12:28.062 "secure_channel": true 00:12:28.062 } 00:12:28.062 } 00:12:28.062 ] 00:12:28.062 } 00:12:28.062 ] 00:12:28.062 }' 00:12:28.062 06:51:32 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:28.321 06:51:32 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:28.321 "subsystems": [ 00:12:28.321 { 00:12:28.321 "subsystem": "iobuf", 00:12:28.321 "config": [ 00:12:28.321 { 00:12:28.321 "method": "iobuf_set_options", 00:12:28.321 "params": { 00:12:28.321 "small_pool_count": 8192, 00:12:28.321 "large_pool_count": 1024, 00:12:28.321 "small_bufsize": 8192, 00:12:28.321 "large_bufsize": 135168 00:12:28.321 } 00:12:28.321 } 00:12:28.321 ] 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "subsystem": "sock", 00:12:28.321 "config": [ 00:12:28.321 { 00:12:28.321 "method": "sock_impl_set_options", 00:12:28.321 "params": { 00:12:28.321 "impl_name": "uring", 00:12:28.321 "recv_buf_size": 2097152, 00:12:28.321 "send_buf_size": 2097152, 
00:12:28.321 "enable_recv_pipe": true, 00:12:28.321 "enable_quickack": false, 00:12:28.321 "enable_placement_id": 0, 00:12:28.321 "enable_zerocopy_send_server": false, 00:12:28.321 "enable_zerocopy_send_client": false, 00:12:28.321 "zerocopy_threshold": 0, 00:12:28.321 "tls_version": 0, 00:12:28.321 "enable_ktls": false 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "sock_impl_set_options", 00:12:28.321 "params": { 00:12:28.321 "impl_name": "posix", 00:12:28.321 "recv_buf_size": 2097152, 00:12:28.321 "send_buf_size": 2097152, 00:12:28.321 "enable_recv_pipe": true, 00:12:28.321 "enable_quickack": false, 00:12:28.321 "enable_placement_id": 0, 00:12:28.321 "enable_zerocopy_send_server": true, 00:12:28.321 "enable_zerocopy_send_client": false, 00:12:28.321 "zerocopy_threshold": 0, 00:12:28.321 "tls_version": 0, 00:12:28.321 "enable_ktls": false 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "sock_impl_set_options", 00:12:28.321 "params": { 00:12:28.321 "impl_name": "ssl", 00:12:28.321 "recv_buf_size": 4096, 00:12:28.321 "send_buf_size": 4096, 00:12:28.321 "enable_recv_pipe": true, 00:12:28.321 "enable_quickack": false, 00:12:28.321 "enable_placement_id": 0, 00:12:28.321 "enable_zerocopy_send_server": true, 00:12:28.321 "enable_zerocopy_send_client": false, 00:12:28.321 "zerocopy_threshold": 0, 00:12:28.321 "tls_version": 0, 00:12:28.321 "enable_ktls": false 00:12:28.321 } 00:12:28.321 } 00:12:28.321 ] 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "subsystem": "vmd", 00:12:28.321 "config": [] 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "subsystem": "accel", 00:12:28.321 "config": [ 00:12:28.321 { 00:12:28.321 "method": "accel_set_options", 00:12:28.321 "params": { 00:12:28.321 "small_cache_size": 128, 00:12:28.321 "large_cache_size": 16, 00:12:28.321 "task_count": 2048, 00:12:28.321 "sequence_count": 2048, 00:12:28.321 "buf_count": 2048 00:12:28.321 } 00:12:28.321 } 00:12:28.321 ] 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "subsystem": "bdev", 00:12:28.321 "config": [ 00:12:28.321 { 00:12:28.321 "method": "bdev_set_options", 00:12:28.321 "params": { 00:12:28.321 "bdev_io_pool_size": 65535, 00:12:28.321 "bdev_io_cache_size": 256, 00:12:28.321 "bdev_auto_examine": true, 00:12:28.321 "iobuf_small_cache_size": 128, 00:12:28.321 "iobuf_large_cache_size": 16 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "bdev_raid_set_options", 00:12:28.321 "params": { 00:12:28.321 "process_window_size_kb": 1024 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "bdev_iscsi_set_options", 00:12:28.321 "params": { 00:12:28.321 "timeout_sec": 30 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "bdev_nvme_set_options", 00:12:28.321 "params": { 00:12:28.321 "action_on_timeout": "none", 00:12:28.321 "timeout_us": 0, 00:12:28.321 "timeout_admin_us": 0, 00:12:28.321 "keep_alive_timeout_ms": 10000, 00:12:28.321 "transport_retry_count": 4, 00:12:28.321 "arbitration_burst": 0, 00:12:28.321 "low_priority_weight": 0, 00:12:28.321 "medium_priority_weight": 0, 00:12:28.321 "high_priority_weight": 0, 00:12:28.321 "nvme_adminq_poll_period_us": 10000, 00:12:28.321 "nvme_ioq_poll_period_us": 0, 00:12:28.321 "io_queue_requests": 512, 00:12:28.321 "delay_cmd_submit": true, 00:12:28.321 "bdev_retry_count": 3, 00:12:28.321 "transport_ack_timeout": 0, 00:12:28.321 "ctrlr_loss_timeout_sec": 0, 00:12:28.321 "reconnect_delay_sec": 0, 00:12:28.321 "fast_io_fail_timeout_sec": 0, 00:12:28.321 "generate_uuids": false, 00:12:28.321 
"transport_tos": 0, 00:12:28.321 "io_path_stat": false, 00:12:28.321 "allow_accel_sequence": false 00:12:28.321 } 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "method": "bdev_nvme_attach_controller", 00:12:28.321 "params": { 00:12:28.321 "name": "TLSTEST", 00:12:28.321 "trtype": "TCP", 00:12:28.321 "adrfam": "IPv4", 00:12:28.321 "traddr": "10.0.0.2", 00:12:28.321 "trsvcid": "4420", 00:12:28.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.321 "prchk_reftag": false, 00:12:28.321 "prchk_guard": false, 00:12:28.321 "ctrlr_loss_timeout_sec": 0, 00:12:28.321 "reconnect_delay_sec": 0, 00:12:28.321 "fast_io_fail_timeout_sec": 0, 00:12:28.321 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:28.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:28.321 "hdgst": false, 00:12:28.321 "ddgst": false 00:12:28.321 } 00:12:28.322 }, 00:12:28.322 { 00:12:28.322 "method": "bdev_nvme_set_hotplug", 00:12:28.322 "params": { 00:12:28.322 "period_us": 100000, 00:12:28.322 "enable": false 00:12:28.322 } 00:12:28.322 }, 00:12:28.322 { 00:12:28.322 "method": "bdev_wait_for_examine" 00:12:28.322 } 00:12:28.322 ] 00:12:28.322 }, 00:12:28.322 { 00:12:28.322 "subsystem": "nbd", 00:12:28.322 "config": [] 00:12:28.322 } 00:12:28.322 ] 00:12:28.322 }' 00:12:28.322 06:51:32 -- target/tls.sh@208 -- # killprocess 77292 00:12:28.322 06:51:32 -- common/autotest_common.sh@936 -- # '[' -z 77292 ']' 00:12:28.322 06:51:32 -- common/autotest_common.sh@940 -- # kill -0 77292 00:12:28.322 06:51:32 -- common/autotest_common.sh@941 -- # uname 00:12:28.322 06:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.322 06:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77292 00:12:28.322 killing process with pid 77292 00:12:28.322 Received shutdown signal, test time was about 10.000000 seconds 00:12:28.322 00:12:28.322 Latency(us) 00:12:28.322 [2024-12-13T06:51:32.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.322 [2024-12-13T06:51:32.841Z] =================================================================================================================== 00:12:28.322 [2024-12-13T06:51:32.841Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:28.322 06:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:28.322 06:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:28.322 06:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77292' 00:12:28.322 06:51:32 -- common/autotest_common.sh@955 -- # kill 77292 00:12:28.322 06:51:32 -- common/autotest_common.sh@960 -- # wait 77292 00:12:28.581 06:51:32 -- target/tls.sh@209 -- # killprocess 77251 00:12:28.581 06:51:32 -- common/autotest_common.sh@936 -- # '[' -z 77251 ']' 00:12:28.581 06:51:32 -- common/autotest_common.sh@940 -- # kill -0 77251 00:12:28.581 06:51:32 -- common/autotest_common.sh@941 -- # uname 00:12:28.581 06:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.581 06:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77251 00:12:28.581 killing process with pid 77251 00:12:28.581 06:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:28.581 06:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:28.581 06:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77251' 00:12:28.581 06:51:32 -- common/autotest_common.sh@955 -- # kill 77251 00:12:28.581 06:51:32 -- common/autotest_common.sh@960 -- # 
wait 77251 00:12:28.581 06:51:33 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:28.581 06:51:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:28.581 06:51:33 -- target/tls.sh@212 -- # echo '{ 00:12:28.581 "subsystems": [ 00:12:28.581 { 00:12:28.581 "subsystem": "iobuf", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "iobuf_set_options", 00:12:28.581 "params": { 00:12:28.581 "small_pool_count": 8192, 00:12:28.581 "large_pool_count": 1024, 00:12:28.581 "small_bufsize": 8192, 00:12:28.581 "large_bufsize": 135168 00:12:28.581 } 00:12:28.581 } 00:12:28.581 ] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "sock", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "sock_impl_set_options", 00:12:28.581 "params": { 00:12:28.581 "impl_name": "uring", 00:12:28.581 "recv_buf_size": 2097152, 00:12:28.581 "send_buf_size": 2097152, 00:12:28.581 "enable_recv_pipe": true, 00:12:28.581 "enable_quickack": false, 00:12:28.581 "enable_placement_id": 0, 00:12:28.581 "enable_zerocopy_send_server": false, 00:12:28.581 "enable_zerocopy_send_client": false, 00:12:28.581 "zerocopy_threshold": 0, 00:12:28.581 "tls_version": 0, 00:12:28.581 "enable_ktls": false 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "sock_impl_set_options", 00:12:28.581 "params": { 00:12:28.581 "impl_name": "posix", 00:12:28.581 "recv_buf_size": 2097152, 00:12:28.581 "send_buf_size": 2097152, 00:12:28.581 "enable_recv_pipe": true, 00:12:28.581 "enable_quickack": false, 00:12:28.581 "enable_placement_id": 0, 00:12:28.581 "enable_zerocopy_send_server": true, 00:12:28.581 "enable_zerocopy_send_client": false, 00:12:28.581 "zerocopy_threshold": 0, 00:12:28.581 "tls_version": 0, 00:12:28.581 "enable_ktls": false 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "sock_impl_set_options", 00:12:28.581 "params": { 00:12:28.581 "impl_name": "ssl", 00:12:28.581 "recv_buf_size": 4096, 00:12:28.581 "send_buf_size": 4096, 00:12:28.581 "enable_recv_pipe": true, 00:12:28.581 "enable_quickack": false, 00:12:28.581 "enable_placement_id": 0, 00:12:28.581 "enable_zerocopy_send_server": true, 00:12:28.581 "enable_zerocopy_send_client": false, 00:12:28.581 "zerocopy_threshold": 0, 00:12:28.581 "tls_version": 0, 00:12:28.581 "enable_ktls": false 00:12:28.581 } 00:12:28.581 } 00:12:28.581 ] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "vmd", 00:12:28.581 "config": [] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "accel", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "accel_set_options", 00:12:28.581 "params": { 00:12:28.581 "small_cache_size": 128, 00:12:28.581 "large_cache_size": 16, 00:12:28.581 "task_count": 2048, 00:12:28.581 "sequence_count": 2048, 00:12:28.581 "buf_count": 2048 00:12:28.581 } 00:12:28.581 } 00:12:28.581 ] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "bdev", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "bdev_set_options", 00:12:28.581 "params": { 00:12:28.581 "bdev_io_pool_size": 65535, 00:12:28.581 "bdev_io_cache_size": 256, 00:12:28.581 "bdev_auto_examine": true, 00:12:28.581 "iobuf_small_cache_size": 128, 00:12:28.581 "iobuf_large_cache_size": 16 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_raid_set_options", 00:12:28.581 "params": { 00:12:28.581 "process_window_size_kb": 1024 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_iscsi_set_options", 00:12:28.581 "params": { 00:12:28.581 "timeout_sec": 30 
00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_nvme_set_options", 00:12:28.581 "params": { 00:12:28.581 "action_on_timeout": "none", 00:12:28.581 "timeout_us": 0, 00:12:28.581 "timeout_admin_us": 0, 00:12:28.581 "keep_alive_timeout_ms": 10000, 00:12:28.581 "transport_retry_count": 4, 00:12:28.581 "arbitration_burst": 0, 00:12:28.581 "low_priority_weight": 0, 00:12:28.581 "medium_priority_weight": 0, 00:12:28.581 "high_priority_weight": 0, 00:12:28.581 "nvme_adminq_poll_period_us": 10000, 00:12:28.581 "nvme_ioq_poll_period_us": 0, 00:12:28.581 "io_queue_requests": 0, 00:12:28.581 "delay_cmd_submit": true, 00:12:28.581 "bdev_retry_count": 3, 00:12:28.581 "transport_ack_timeout": 0, 00:12:28.581 "ctrlr_loss_timeout_sec": 0, 00:12:28.581 "reconnect_delay_sec": 0, 00:12:28.581 "fast_io_fail_timeout_sec": 0, 00:12:28.581 "generate_uuids": false, 00:12:28.581 "transport_tos": 0, 00:12:28.581 "io_path_stat": false, 00:12:28.581 "allow_accel_sequence": false 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_nvme_set_hotplug", 00:12:28.581 "params": { 00:12:28.581 "period_us": 100000, 00:12:28.581 "enable": false 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_malloc_create", 00:12:28.581 "params": { 00:12:28.581 "name": "malloc0", 00:12:28.581 "num_blocks": 8192, 00:12:28.581 "block_size": 4096, 00:12:28.581 "physical_block_size": 4096, 00:12:28.581 "uuid": "6d16870b-f945-406e-b8f5-3192ecfe057b", 00:12:28.581 "optimal_io_boundary": 0 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "bdev_wait_for_examine" 00:12:28.581 } 00:12:28.581 ] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "nbd", 00:12:28.581 "config": [] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "scheduler", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "framework_set_scheduler", 00:12:28.581 "params": { 00:12:28.581 "name": "static" 00:12:28.581 } 00:12:28.581 } 00:12:28.581 ] 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "subsystem": "nvmf", 00:12:28.581 "config": [ 00:12:28.581 { 00:12:28.581 "method": "nvmf_set_config", 00:12:28.581 "params": { 00:12:28.581 "discovery_filter": "match_any", 00:12:28.581 "admin_cmd_passthru": { 00:12:28.581 "identify_ctrlr": false 00:12:28.581 } 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "nvmf_set_max_subsystems", 00:12:28.581 "params": { 00:12:28.581 "max_subsystems": 1024 00:12:28.581 } 00:12:28.581 }, 00:12:28.581 { 00:12:28.581 "method": "nvmf_set_crdt", 00:12:28.581 "params": { 00:12:28.581 "crdt1": 0, 00:12:28.581 "crdt2": 0, 00:12:28.581 "crdt3": 0 00:12:28.581 } 00:12:28.582 }, 00:12:28.582 { 00:12:28.582 "method": "nvmf_create_transport", 00:12:28.582 "params": { 00:12:28.582 "trtype": "TCP", 00:12:28.582 "max_queue_depth": 128, 00:12:28.582 "max_io_qpairs_per_ctrlr": 127, 00:12:28.582 "in_capsule_data_size": 4096, 00:12:28.582 "max_io_size": 131072, 00:12:28.582 "io_unit_size": 131072, 00:12:28.582 "max_aq_depth": 128, 00:12:28.582 "num_shared_buffers": 511, 00:12:28.582 "buf_cache_size": 4294967295, 00:12:28.582 "dif_insert_or_strip": false, 00:12:28.582 "zcopy": false, 00:12:28.582 "c2h_success": false, 00:12:28.582 "sock_priority": 0, 00:12:28.582 "abort_timeout_sec": 1 00:12:28.582 } 00:12:28.582 }, 00:12:28.582 { 00:12:28.582 "method": "nvmf_create_subsystem", 00:12:28.582 "params": { 00:12:28.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.582 "allow_any_host": false, 00:12:28.582 "serial_number": "SPDK00000000000001", 
00:12:28.582 "model_number": "SPDK bdev Controller", 00:12:28.582 "max_namespaces": 10, 00:12:28.582 "min_cntlid": 1, 00:12:28.582 "max_cntlid": 65519, 00:12:28.582 "ana_reporting": false 00:12:28.582 } 00:12:28.582 }, 00:12:28.582 { 00:12:28.582 "method": "nvmf_subsystem_add_host", 00:12:28.582 "params": { 00:12:28.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.582 "host": "nqn.2016-06.io.spdk:host1", 00:12:28.582 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:28.582 } 00:12:28.582 }, 00:12:28.582 { 00:12:28.582 "method": "nvmf_subsystem_add_ns", 00:12:28.582 "params": { 00:12:28.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.582 "namespace": { 00:12:28.582 "nsid": 1, 00:12:28.582 "bdev_name": "malloc0", 00:12:28.582 "nguid": "6D16870BF945406EB8F53192ECFE057B", 00:12:28.582 "uuid": "6d16870b-f945-406e-b8f5-3192ecfe057b" 00:12:28.582 } 00:12:28.582 } 00:12:28.582 }, 00:12:28.582 { 00:12:28.582 "method": "nvmf_subsystem_add_listener", 00:12:28.582 "params": { 00:12:28.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.582 "listen_address": { 00:12:28.582 "trtype": "TCP", 00:12:28.582 "adrfam": "IPv4", 00:12:28.582 "traddr": "10.0.0.2", 00:12:28.582 "trsvcid": "4420" 00:12:28.582 }, 00:12:28.582 "secure_channel": true 00:12:28.582 } 00:12:28.582 } 00:12:28.582 ] 00:12:28.582 } 00:12:28.582 ] 00:12:28.582 }' 00:12:28.582 06:51:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.582 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.582 06:51:33 -- nvmf/common.sh@469 -- # nvmfpid=77341 00:12:28.582 06:51:33 -- nvmf/common.sh@470 -- # waitforlisten 77341 00:12:28.582 06:51:33 -- common/autotest_common.sh@829 -- # '[' -z 77341 ']' 00:12:28.582 06:51:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:28.582 06:51:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.582 06:51:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.582 06:51:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.582 06:51:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.582 06:51:33 -- common/autotest_common.sh@10 -- # set +x 00:12:28.840 [2024-12-13 06:51:33.116810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:28.840 [2024-12-13 06:51:33.117129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.840 [2024-12-13 06:51:33.257671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.840 [2024-12-13 06:51:33.289990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.840 [2024-12-13 06:51:33.290142] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.840 [2024-12-13 06:51:33.290156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.840 [2024-12-13 06:51:33.290165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.840 [2024-12-13 06:51:33.290191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.099 [2024-12-13 06:51:33.467794] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.099 [2024-12-13 06:51:33.499742] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:29.099 [2024-12-13 06:51:33.499951] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.666 06:51:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.666 06:51:34 -- common/autotest_common.sh@862 -- # return 0 00:12:29.666 06:51:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:29.666 06:51:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.666 06:51:34 -- common/autotest_common.sh@10 -- # set +x 00:12:29.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.666 06:51:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.666 06:51:34 -- target/tls.sh@216 -- # bdevperf_pid=77373 00:12:29.666 06:51:34 -- target/tls.sh@217 -- # waitforlisten 77373 /var/tmp/bdevperf.sock 00:12:29.666 06:51:34 -- common/autotest_common.sh@829 -- # '[' -z 77373 ']' 00:12:29.666 06:51:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.666 06:51:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.666 06:51:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.666 06:51:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.666 06:51:34 -- common/autotest_common.sh@10 -- # set +x 00:12:29.666 06:51:34 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:29.666 06:51:34 -- target/tls.sh@213 -- # echo '{ 00:12:29.666 "subsystems": [ 00:12:29.666 { 00:12:29.666 "subsystem": "iobuf", 00:12:29.666 "config": [ 00:12:29.666 { 00:12:29.666 "method": "iobuf_set_options", 00:12:29.666 "params": { 00:12:29.666 "small_pool_count": 8192, 00:12:29.666 "large_pool_count": 1024, 00:12:29.666 "small_bufsize": 8192, 00:12:29.666 "large_bufsize": 135168 00:12:29.666 } 00:12:29.666 } 00:12:29.666 ] 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "subsystem": "sock", 00:12:29.666 "config": [ 00:12:29.666 { 00:12:29.666 "method": "sock_impl_set_options", 00:12:29.666 "params": { 00:12:29.666 "impl_name": "uring", 00:12:29.666 "recv_buf_size": 2097152, 00:12:29.666 "send_buf_size": 2097152, 00:12:29.666 "enable_recv_pipe": true, 00:12:29.666 "enable_quickack": false, 00:12:29.666 "enable_placement_id": 0, 00:12:29.666 "enable_zerocopy_send_server": false, 00:12:29.666 "enable_zerocopy_send_client": false, 00:12:29.666 "zerocopy_threshold": 0, 00:12:29.666 "tls_version": 0, 00:12:29.666 "enable_ktls": false 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "sock_impl_set_options", 00:12:29.666 "params": { 00:12:29.666 "impl_name": "posix", 00:12:29.666 "recv_buf_size": 2097152, 00:12:29.666 "send_buf_size": 2097152, 00:12:29.666 "enable_recv_pipe": true, 00:12:29.666 "enable_quickack": false, 00:12:29.666 "enable_placement_id": 0, 00:12:29.666 "enable_zerocopy_send_server": true, 00:12:29.666 "enable_zerocopy_send_client": false, 00:12:29.666 "zerocopy_threshold": 0, 00:12:29.666 "tls_version": 0, 00:12:29.666 
"enable_ktls": false 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "sock_impl_set_options", 00:12:29.666 "params": { 00:12:29.666 "impl_name": "ssl", 00:12:29.666 "recv_buf_size": 4096, 00:12:29.666 "send_buf_size": 4096, 00:12:29.666 "enable_recv_pipe": true, 00:12:29.666 "enable_quickack": false, 00:12:29.666 "enable_placement_id": 0, 00:12:29.666 "enable_zerocopy_send_server": true, 00:12:29.666 "enable_zerocopy_send_client": false, 00:12:29.666 "zerocopy_threshold": 0, 00:12:29.666 "tls_version": 0, 00:12:29.666 "enable_ktls": false 00:12:29.666 } 00:12:29.666 } 00:12:29.666 ] 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "subsystem": "vmd", 00:12:29.666 "config": [] 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "subsystem": "accel", 00:12:29.666 "config": [ 00:12:29.666 { 00:12:29.666 "method": "accel_set_options", 00:12:29.666 "params": { 00:12:29.666 "small_cache_size": 128, 00:12:29.666 "large_cache_size": 16, 00:12:29.666 "task_count": 2048, 00:12:29.666 "sequence_count": 2048, 00:12:29.666 "buf_count": 2048 00:12:29.666 } 00:12:29.666 } 00:12:29.666 ] 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "subsystem": "bdev", 00:12:29.666 "config": [ 00:12:29.666 { 00:12:29.666 "method": "bdev_set_options", 00:12:29.666 "params": { 00:12:29.666 "bdev_io_pool_size": 65535, 00:12:29.666 "bdev_io_cache_size": 256, 00:12:29.666 "bdev_auto_examine": true, 00:12:29.666 "iobuf_small_cache_size": 128, 00:12:29.666 "iobuf_large_cache_size": 16 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "bdev_raid_set_options", 00:12:29.666 "params": { 00:12:29.666 "process_window_size_kb": 1024 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "bdev_iscsi_set_options", 00:12:29.666 "params": { 00:12:29.666 "timeout_sec": 30 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "bdev_nvme_set_options", 00:12:29.666 "params": { 00:12:29.666 "action_on_timeout": "none", 00:12:29.666 "timeout_us": 0, 00:12:29.666 "timeout_admin_us": 0, 00:12:29.666 "keep_alive_timeout_ms": 10000, 00:12:29.666 "transport_retry_count": 4, 00:12:29.666 "arbitration_burst": 0, 00:12:29.666 "low_priority_weight": 0, 00:12:29.666 "medium_priority_weight": 0, 00:12:29.666 "high_priority_weight": 0, 00:12:29.666 "nvme_adminq_poll_period_us": 10000, 00:12:29.666 "nvme_ioq_poll_period_us": 0, 00:12:29.666 "io_queue_requests": 512, 00:12:29.666 "delay_cmd_submit": true, 00:12:29.666 "bdev_retry_count": 3, 00:12:29.666 "transport_ack_timeout": 0, 00:12:29.666 "ctrlr_loss_timeout_sec": 0, 00:12:29.666 "reconnect_delay_sec": 0, 00:12:29.666 "fast_io_fail_timeout_sec": 0, 00:12:29.666 "generate_uuids": false, 00:12:29.666 "transport_tos": 0, 00:12:29.666 "io_path_stat": false, 00:12:29.666 "allow_accel_sequence": false 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "bdev_nvme_attach_controller", 00:12:29.666 "params": { 00:12:29.666 "name": "TLSTEST", 00:12:29.666 "trtype": "TCP", 00:12:29.666 "adrfam": "IPv4", 00:12:29.666 "traddr": "10.0.0.2", 00:12:29.666 "trsvcid": "4420", 00:12:29.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.666 "prchk_reftag": false, 00:12:29.666 "prchk_guard": false, 00:12:29.666 "ctrlr_loss_timeout_sec": 0, 00:12:29.666 "reconnect_delay_sec": 0, 00:12:29.666 "fast_io_fail_timeout_sec": 0, 00:12:29.666 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:29.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:29.666 "hdgst": false, 00:12:29.666 "ddgst": false 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 
{ 00:12:29.666 "method": "bdev_nvme_set_hotplug", 00:12:29.666 "params": { 00:12:29.666 "period_us": 100000, 00:12:29.666 "enable": false 00:12:29.666 } 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "method": "bdev_wait_for_examine" 00:12:29.666 } 00:12:29.666 ] 00:12:29.666 }, 00:12:29.666 { 00:12:29.666 "subsystem": "nbd", 00:12:29.666 "config": [] 00:12:29.666 } 00:12:29.666 ] 00:12:29.666 }' 00:12:29.925 [2024-12-13 06:51:34.184110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:29.925 [2024-12-13 06:51:34.184210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77373 ] 00:12:29.925 [2024-12-13 06:51:34.326818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.925 [2024-12-13 06:51:34.365460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.183 [2024-12-13 06:51:34.491529] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:30.749 06:51:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.749 06:51:35 -- common/autotest_common.sh@862 -- # return 0 00:12:30.749 06:51:35 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:30.749 Running I/O for 10 seconds... 00:12:40.729 00:12:40.729 Latency(us) 00:12:40.729 [2024-12-13T06:51:45.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.729 [2024-12-13T06:51:45.248Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:40.729 Verification LBA range: start 0x0 length 0x2000 00:12:40.730 TLSTESTn1 : 10.02 6101.26 23.83 0.00 0.00 20943.53 5034.36 21805.61 00:12:40.730 [2024-12-13T06:51:45.249Z] =================================================================================================================== 00:12:40.730 [2024-12-13T06:51:45.249Z] Total : 6101.26 23.83 0.00 0.00 20943.53 5034.36 21805.61 00:12:40.730 0 00:12:40.990 06:51:45 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.991 06:51:45 -- target/tls.sh@223 -- # killprocess 77373 00:12:40.991 06:51:45 -- common/autotest_common.sh@936 -- # '[' -z 77373 ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@940 -- # kill -0 77373 00:12:40.991 06:51:45 -- common/autotest_common.sh@941 -- # uname 00:12:40.991 06:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77373 00:12:40.991 killing process with pid 77373 00:12:40.991 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.991 00:12:40.991 Latency(us) 00:12:40.991 [2024-12-13T06:51:45.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.991 [2024-12-13T06:51:45.510Z] =================================================================================================================== 00:12:40.991 [2024-12-13T06:51:45.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.991 06:51:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:40.991 06:51:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77373' 00:12:40.991 06:51:45 -- 
common/autotest_common.sh@955 -- # kill 77373 00:12:40.991 06:51:45 -- common/autotest_common.sh@960 -- # wait 77373 00:12:40.991 06:51:45 -- target/tls.sh@224 -- # killprocess 77341 00:12:40.991 06:51:45 -- common/autotest_common.sh@936 -- # '[' -z 77341 ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@940 -- # kill -0 77341 00:12:40.991 06:51:45 -- common/autotest_common.sh@941 -- # uname 00:12:40.991 06:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77341 00:12:40.991 killing process with pid 77341 00:12:40.991 06:51:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:40.991 06:51:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:40.991 06:51:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77341' 00:12:40.991 06:51:45 -- common/autotest_common.sh@955 -- # kill 77341 00:12:40.991 06:51:45 -- common/autotest_common.sh@960 -- # wait 77341 00:12:41.249 06:51:45 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:41.249 06:51:45 -- target/tls.sh@227 -- # cleanup 00:12:41.249 06:51:45 -- target/tls.sh@15 -- # process_shm --id 0 00:12:41.249 06:51:45 -- common/autotest_common.sh@806 -- # type=--id 00:12:41.249 06:51:45 -- common/autotest_common.sh@807 -- # id=0 00:12:41.249 06:51:45 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:41.249 06:51:45 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:41.249 06:51:45 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:41.249 06:51:45 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:41.249 06:51:45 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:41.249 06:51:45 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:41.249 nvmf_trace.0 00:12:41.249 06:51:45 -- common/autotest_common.sh@821 -- # return 0 00:12:41.249 Process with pid 77373 is not found 00:12:41.249 06:51:45 -- target/tls.sh@16 -- # killprocess 77373 00:12:41.249 06:51:45 -- common/autotest_common.sh@936 -- # '[' -z 77373 ']' 00:12:41.249 06:51:45 -- common/autotest_common.sh@940 -- # kill -0 77373 00:12:41.250 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77373) - No such process 00:12:41.250 06:51:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77373 is not found' 00:12:41.250 06:51:45 -- target/tls.sh@17 -- # nvmftestfini 00:12:41.250 06:51:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:41.250 06:51:45 -- nvmf/common.sh@116 -- # sync 00:12:41.250 06:51:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.250 06:51:45 -- nvmf/common.sh@119 -- # set +e 00:12:41.250 06:51:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.250 06:51:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.250 rmmod nvme_tcp 00:12:41.250 rmmod nvme_fabrics 00:12:41.250 rmmod nvme_keyring 00:12:41.250 06:51:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.250 06:51:45 -- nvmf/common.sh@123 -- # set -e 00:12:41.250 06:51:45 -- nvmf/common.sh@124 -- # return 0 00:12:41.250 06:51:45 -- nvmf/common.sh@477 -- # '[' -n 77341 ']' 00:12:41.509 06:51:45 -- nvmf/common.sh@478 -- # killprocess 77341 00:12:41.509 06:51:45 -- common/autotest_common.sh@936 -- # '[' -z 77341 ']' 00:12:41.509 06:51:45 -- common/autotest_common.sh@940 -- # kill -0 77341 00:12:41.509 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77341) - No such process 00:12:41.509 Process with pid 77341 is not found 00:12:41.509 06:51:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77341 is not found' 00:12:41.509 06:51:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.509 06:51:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.509 06:51:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.509 06:51:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.509 06:51:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.509 06:51:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.509 06:51:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.509 06:51:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.509 06:51:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:41.509 06:51:45 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:41.509 00:12:41.509 real 1m7.713s 00:12:41.509 user 1m45.474s 00:12:41.509 sys 0m23.244s 00:12:41.509 06:51:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.509 06:51:45 -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 ************************************ 00:12:41.509 END TEST nvmf_tls 00:12:41.509 ************************************ 00:12:41.509 06:51:45 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:41.509 06:51:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.509 06:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.509 06:51:45 -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 ************************************ 00:12:41.509 START TEST nvmf_fips 00:12:41.509 ************************************ 00:12:41.509 06:51:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:41.509 * Looking for test storage... 
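The bdev_nvme_attach_controller entry in the JSON dump at the top of this test maps directly onto a plain rpc.py call. A minimal sketch, reusing only names and paths that appear in this log (the harness drives this through target/tls.sh, so treat it as illustrative rather than the exact invocation):

# Sketch: attach an NVMe-oF TCP controller with a TLS pre-shared key, as the
# TLS test's bdevperf instance was configured above. All values are taken
# from the JSON config dumped earlier in this log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt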
00:12:41.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:41.509 06:51:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.509 06:51:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.509 06:51:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.509 06:51:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.509 06:51:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.509 06:51:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.509 06:51:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.509 06:51:46 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.509 06:51:46 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.509 06:51:46 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.509 06:51:46 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.509 06:51:46 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.509 06:51:46 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.509 06:51:46 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.509 06:51:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.509 06:51:46 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.509 06:51:46 -- scripts/common.sh@344 -- # : 1 00:12:41.509 06:51:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.509 06:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.509 06:51:46 -- scripts/common.sh@364 -- # decimal 1 00:12:41.509 06:51:46 -- scripts/common.sh@352 -- # local d=1 00:12:41.509 06:51:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.509 06:51:46 -- scripts/common.sh@354 -- # echo 1 00:12:41.509 06:51:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # decimal 2 00:12:41.769 06:51:46 -- scripts/common.sh@352 -- # local d=2 00:12:41.769 06:51:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.769 06:51:46 -- scripts/common.sh@354 -- # echo 2 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.769 06:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.769 06:51:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.769 06:51:46 -- scripts/common.sh@367 -- # return 0 00:12:41.769 06:51:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.769 06:51:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.769 --rc genhtml_branch_coverage=1 00:12:41.769 --rc genhtml_function_coverage=1 00:12:41.769 --rc genhtml_legend=1 00:12:41.769 --rc geninfo_all_blocks=1 00:12:41.769 --rc geninfo_unexecuted_blocks=1 00:12:41.769 00:12:41.769 ' 00:12:41.769 06:51:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.769 --rc genhtml_branch_coverage=1 00:12:41.769 --rc genhtml_function_coverage=1 00:12:41.769 --rc genhtml_legend=1 00:12:41.769 --rc geninfo_all_blocks=1 00:12:41.769 --rc geninfo_unexecuted_blocks=1 00:12:41.769 00:12:41.769 ' 00:12:41.769 06:51:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.769 --rc genhtml_branch_coverage=1 00:12:41.769 --rc genhtml_function_coverage=1 00:12:41.769 --rc genhtml_legend=1 00:12:41.769 --rc geninfo_all_blocks=1 00:12:41.769 --rc geninfo_unexecuted_blocks=1 00:12:41.769 00:12:41.769 ' 00:12:41.769 
06:51:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.769 --rc genhtml_branch_coverage=1 00:12:41.769 --rc genhtml_function_coverage=1 00:12:41.769 --rc genhtml_legend=1 00:12:41.769 --rc geninfo_all_blocks=1 00:12:41.769 --rc geninfo_unexecuted_blocks=1 00:12:41.769 00:12:41.769 ' 00:12:41.769 06:51:46 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.769 06:51:46 -- nvmf/common.sh@7 -- # uname -s 00:12:41.769 06:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.769 06:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.769 06:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.769 06:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.769 06:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.769 06:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.769 06:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.769 06:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.769 06:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.769 06:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.769 06:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:12:41.769 06:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:12:41.769 06:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.769 06:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.769 06:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.769 06:51:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.769 06:51:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.769 06:51:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.769 06:51:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.769 06:51:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.769 06:51:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.769 06:51:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.769 06:51:46 -- paths/export.sh@5 -- # export PATH 00:12:41.769 06:51:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.769 06:51:46 -- nvmf/common.sh@46 -- # : 0 00:12:41.769 06:51:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.769 06:51:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.769 06:51:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.769 06:51:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.769 06:51:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.769 06:51:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:41.769 06:51:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.769 06:51:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.769 06:51:46 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.769 06:51:46 -- fips/fips.sh@89 -- # check_openssl_version 00:12:41.769 06:51:46 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:41.769 06:51:46 -- fips/fips.sh@85 -- # openssl version 00:12:41.769 06:51:46 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:41.769 06:51:46 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:41.769 06:51:46 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:41.769 06:51:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.769 06:51:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.769 06:51:46 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.769 06:51:46 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.769 06:51:46 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.769 06:51:46 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.769 06:51:46 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:41.769 06:51:46 -- scripts/common.sh@339 -- # ver1_l=3 00:12:41.769 06:51:46 -- scripts/common.sh@340 -- # ver2_l=3 00:12:41.769 06:51:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.769 06:51:46 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.769 06:51:46 -- scripts/common.sh@347 -- # : 1 00:12:41.769 06:51:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.769 06:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.769 06:51:46 -- scripts/common.sh@364 -- # decimal 3 00:12:41.769 06:51:46 -- scripts/common.sh@352 -- # local d=3 00:12:41.769 06:51:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:41.769 06:51:46 -- scripts/common.sh@354 -- # echo 3 00:12:41.769 06:51:46 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # decimal 3 00:12:41.769 06:51:46 -- scripts/common.sh@352 -- # local d=3 00:12:41.769 06:51:46 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:41.769 06:51:46 -- scripts/common.sh@354 -- # echo 3 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:41.769 06:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.769 06:51:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.769 06:51:46 -- scripts/common.sh@363 -- # (( v++ )) 00:12:41.769 06:51:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.769 06:51:46 -- scripts/common.sh@364 -- # decimal 1 00:12:41.769 06:51:46 -- scripts/common.sh@352 -- # local d=1 00:12:41.769 06:51:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.769 06:51:46 -- scripts/common.sh@354 -- # echo 1 00:12:41.769 06:51:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # decimal 0 00:12:41.769 06:51:46 -- scripts/common.sh@352 -- # local d=0 00:12:41.769 06:51:46 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:41.769 06:51:46 -- scripts/common.sh@354 -- # echo 0 00:12:41.769 06:51:46 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:41.769 06:51:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.769 06:51:46 -- scripts/common.sh@366 -- # return 0 00:12:41.769 06:51:46 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:41.770 06:51:46 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:41.770 06:51:46 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:41.770 06:51:46 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:41.770 06:51:46 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:41.770 06:51:46 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:41.770 06:51:46 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:41.770 06:51:46 -- fips/fips.sh@113 -- # build_openssl_config 00:12:41.770 06:51:46 -- fips/fips.sh@37 -- # cat 00:12:41.770 06:51:46 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:12:41.770 06:51:46 -- fips/fips.sh@58 -- # cat - 00:12:41.770 06:51:46 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:41.770 06:51:46 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:41.770 06:51:46 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:41.770 06:51:46 -- fips/fips.sh@116 -- # openssl list -providers 00:12:41.770 06:51:46 -- fips/fips.sh@116 -- # grep name 00:12:41.770 06:51:46 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:41.770 06:51:46 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:41.770 06:51:46 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:41.770 06:51:46 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:41.770 06:51:46 -- fips/fips.sh@127 -- # : 00:12:41.770 06:51:46 -- common/autotest_common.sh@650 -- # local es=0 00:12:41.770 06:51:46 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:41.770 06:51:46 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:41.770 06:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.770 06:51:46 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:41.770 06:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.770 06:51:46 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:41.770 06:51:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.770 06:51:46 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:41.770 06:51:46 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:41.770 06:51:46 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:41.770 Error setting digest 00:12:41.770 40320307007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:41.770 40320307007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:41.770 06:51:46 -- common/autotest_common.sh@653 -- # es=1 00:12:41.770 06:51:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:41.770 06:51:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:41.770 06:51:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:41.770 06:51:46 -- fips/fips.sh@130 -- # nvmftestinit 00:12:41.770 06:51:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.770 06:51:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.770 06:51:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.770 06:51:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.770 06:51:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.770 06:51:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.770 06:51:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.770 06:51:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.770 06:51:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:41.770 06:51:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:41.770 06:51:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:41.770 06:51:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:41.770 06:51:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:41.770 06:51:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:41.770 06:51:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.770 06:51:46 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.770 06:51:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.770 06:51:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:41.770 06:51:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.770 06:51:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.770 06:51:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.770 06:51:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.770 06:51:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.770 06:51:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.770 06:51:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.770 06:51:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.770 06:51:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:41.770 06:51:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:41.770 Cannot find device "nvmf_tgt_br" 00:12:41.770 06:51:46 -- nvmf/common.sh@154 -- # true 00:12:41.770 06:51:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.770 Cannot find device "nvmf_tgt_br2" 00:12:41.770 06:51:46 -- nvmf/common.sh@155 -- # true 00:12:41.770 06:51:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:41.770 06:51:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:42.029 Cannot find device "nvmf_tgt_br" 00:12:42.029 06:51:46 -- nvmf/common.sh@157 -- # true 00:12:42.029 06:51:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:42.029 Cannot find device "nvmf_tgt_br2" 00:12:42.029 06:51:46 -- nvmf/common.sh@158 -- # true 00:12:42.029 06:51:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:42.029 06:51:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:42.029 06:51:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.029 06:51:46 -- nvmf/common.sh@161 -- # true 00:12:42.029 06:51:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.029 06:51:46 -- nvmf/common.sh@162 -- # true 00:12:42.029 06:51:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:42.029 06:51:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:42.029 06:51:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:42.029 06:51:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:42.029 06:51:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:42.029 06:51:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:42.029 06:51:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:42.029 06:51:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:42.029 06:51:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:42.029 06:51:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:42.029 06:51:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:42.029 06:51:46 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:42.029 06:51:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:42.029 06:51:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:42.029 06:51:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:42.029 06:51:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:42.029 06:51:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:42.029 06:51:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:42.029 06:51:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:42.029 06:51:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:42.029 06:51:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:42.029 06:51:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:42.029 06:51:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:42.029 06:51:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:42.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:42.029 00:12:42.029 --- 10.0.0.2 ping statistics --- 00:12:42.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.029 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:42.029 06:51:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:42.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:42.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:42.029 00:12:42.029 --- 10.0.0.3 ping statistics --- 00:12:42.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.029 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:42.029 06:51:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:42.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:42.029 00:12:42.029 --- 10.0.0.1 ping statistics --- 00:12:42.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.029 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:42.029 06:51:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.029 06:51:46 -- nvmf/common.sh@421 -- # return 0 00:12:42.029 06:51:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:42.029 06:51:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.029 06:51:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:42.029 06:51:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:42.029 06:51:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.029 06:51:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:42.029 06:51:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:42.288 06:51:46 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:42.288 06:51:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:42.288 06:51:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.288 06:51:46 -- common/autotest_common.sh@10 -- # set +x 00:12:42.288 06:51:46 -- nvmf/common.sh@469 -- # nvmfpid=77723 00:12:42.288 06:51:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:42.288 06:51:46 -- nvmf/common.sh@470 -- # waitforlisten 77723 00:12:42.288 06:51:46 -- common/autotest_common.sh@829 -- # '[' -z 77723 ']' 00:12:42.288 06:51:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.288 06:51:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.288 06:51:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.288 06:51:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.288 06:51:46 -- common/autotest_common.sh@10 -- # set +x 00:12:42.288 [2024-12-13 06:51:46.641081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:42.289 [2024-12-13 06:51:46.641474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.289 [2024-12-13 06:51:46.784686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.548 [2024-12-13 06:51:46.823788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.548 [2024-12-13 06:51:46.824283] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.548 [2024-12-13 06:51:46.824505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.548 [2024-12-13 06:51:46.824534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
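For orientation, the veth/namespace topology that nvmf_veth_init assembled in the trace above reduces to the following condensed sketch of the commands actually traced; the second target interface, the remaining link-up steps, and teardown are elided:

# Sketch of the test network built above: one initiator veth pair on the host,
# one target veth pair whose far end lives in the nvmf_tgt_ns_spdk namespace,
# all bridged together so 10.0.0.1 (host) can reach 10.0.0.2 (target).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT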
00:12:42.548 [2024-12-13 06:51:46.824571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.116 06:51:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.116 06:51:47 -- common/autotest_common.sh@862 -- # return 0 00:12:43.116 06:51:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:43.116 06:51:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.116 06:51:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.116 06:51:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.116 06:51:47 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:43.116 06:51:47 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:43.116 06:51:47 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:43.116 06:51:47 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:43.116 06:51:47 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:43.116 06:51:47 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:43.116 06:51:47 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:43.116 06:51:47 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.374 [2024-12-13 06:51:47.879776] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.633 [2024-12-13 06:51:47.895738] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:43.633 [2024-12-13 06:51:47.896026] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.633 malloc0 00:12:43.633 06:51:47 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.633 06:51:47 -- fips/fips.sh@147 -- # bdevperf_pid=77762 00:12:43.633 06:51:47 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:43.633 06:51:47 -- fips/fips.sh@148 -- # waitforlisten 77762 /var/tmp/bdevperf.sock 00:12:43.633 06:51:47 -- common/autotest_common.sh@829 -- # '[' -z 77762 ']' 00:12:43.633 06:51:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.633 06:51:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.633 06:51:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.633 06:51:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.633 06:51:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.633 [2024-12-13 06:51:48.026545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:43.633 [2024-12-13 06:51:48.026655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77762 ] 00:12:43.893 [2024-12-13 06:51:48.165753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.893 [2024-12-13 06:51:48.201893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.830 06:51:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.830 06:51:49 -- common/autotest_common.sh@862 -- # return 0 00:12:44.830 06:51:49 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:44.830 [2024-12-13 06:51:49.264567] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:44.830 TLSTESTn1 00:12:45.089 06:51:49 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:45.089 Running I/O for 10 seconds... 00:12:55.110 00:12:55.110 Latency(us) 00:12:55.110 [2024-12-13T06:51:59.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.110 [2024-12-13T06:51:59.630Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:55.111 Verification LBA range: start 0x0 length 0x2000 00:12:55.111 TLSTESTn1 : 10.01 6301.38 24.61 0.00 0.00 20281.56 4408.79 27048.49 00:12:55.111 [2024-12-13T06:51:59.630Z] =================================================================================================================== 00:12:55.111 [2024-12-13T06:51:59.630Z] Total : 6301.38 24.61 0.00 0.00 20281.56 4408.79 27048.49 00:12:55.111 0 00:12:55.111 06:51:59 -- fips/fips.sh@1 -- # cleanup 00:12:55.111 06:51:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:55.111 06:51:59 -- common/autotest_common.sh@806 -- # type=--id 00:12:55.111 06:51:59 -- common/autotest_common.sh@807 -- # id=0 00:12:55.111 06:51:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:55.111 06:51:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:55.111 06:51:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:55.111 06:51:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:55.111 06:51:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:55.111 06:51:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:55.111 nvmf_trace.0 00:12:55.111 06:51:59 -- common/autotest_common.sh@821 -- # return 0 00:12:55.111 06:51:59 -- fips/fips.sh@16 -- # killprocess 77762 00:12:55.111 06:51:59 -- common/autotest_common.sh@936 -- # '[' -z 77762 ']' 00:12:55.111 06:51:59 -- common/autotest_common.sh@940 -- # kill -0 77762 00:12:55.111 06:51:59 -- common/autotest_common.sh@941 -- # uname 00:12:55.111 06:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.111 06:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77762 00:12:55.111 killing process with pid 77762 00:12:55.111 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.111 00:12:55.111 Latency(us) 00:12:55.111 
[2024-12-13T06:51:59.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.111 [2024-12-13T06:51:59.630Z] =================================================================================================================== 00:12:55.111 [2024-12-13T06:51:59.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.111 06:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:55.111 06:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:55.111 06:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77762' 00:12:55.111 06:51:59 -- common/autotest_common.sh@955 -- # kill 77762 00:12:55.111 06:51:59 -- common/autotest_common.sh@960 -- # wait 77762 00:12:55.370 06:51:59 -- fips/fips.sh@17 -- # nvmftestfini 00:12:55.370 06:51:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:55.370 06:51:59 -- nvmf/common.sh@116 -- # sync 00:12:55.370 06:51:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:55.370 06:51:59 -- nvmf/common.sh@119 -- # set +e 00:12:55.370 06:51:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:55.370 06:51:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:55.370 rmmod nvme_tcp 00:12:55.370 rmmod nvme_fabrics 00:12:55.370 rmmod nvme_keyring 00:12:55.370 06:51:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:55.370 06:51:59 -- nvmf/common.sh@123 -- # set -e 00:12:55.370 06:51:59 -- nvmf/common.sh@124 -- # return 0 00:12:55.370 06:51:59 -- nvmf/common.sh@477 -- # '[' -n 77723 ']' 00:12:55.370 06:51:59 -- nvmf/common.sh@478 -- # killprocess 77723 00:12:55.370 06:51:59 -- common/autotest_common.sh@936 -- # '[' -z 77723 ']' 00:12:55.370 06:51:59 -- common/autotest_common.sh@940 -- # kill -0 77723 00:12:55.370 06:51:59 -- common/autotest_common.sh@941 -- # uname 00:12:55.370 06:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.370 06:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77723 00:12:55.370 killing process with pid 77723 00:12:55.370 06:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:55.370 06:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:55.370 06:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77723' 00:12:55.370 06:51:59 -- common/autotest_common.sh@955 -- # kill 77723 00:12:55.370 06:51:59 -- common/autotest_common.sh@960 -- # wait 77723 00:12:55.629 06:51:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:55.629 06:51:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:55.630 06:51:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:55.630 06:51:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.630 06:51:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:55.630 06:51:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.630 06:51:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.630 06:51:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.630 06:52:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:55.630 06:52:00 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:55.630 ************************************ 00:12:55.630 END TEST nvmf_fips 00:12:55.630 ************************************ 00:12:55.630 00:12:55.630 real 0m14.179s 00:12:55.630 user 0m19.321s 00:12:55.630 sys 0m5.735s 00:12:55.630 06:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 
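The process_shm step traced in this cleanup archives the SPDK trace shared-memory file so it can be inspected offline (for example with the 'spdk_trace -s nvmf -i 0' command the application suggests at startup). Reduced to a sketch, with error paths omitted and the output directory taken from the tar command in the trace:

# Sketch of what process_shm --id 0 does above: find the trace shm file for
# app instance 0 and tar it into the build's output directory.
out=/home/vagrant/spdk_repo/spdk/../output
shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')   # -> nvmf_trace.0
for f in $shm_files; do
    tar -C /dev/shm/ -cvzf "$out/${f}_shm.tar.gz" "$f"
done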
00:12:55.630 06:52:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.630 06:52:00 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:55.630 06:52:00 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:55.630 06:52:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.630 06:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.630 06:52:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.630 ************************************ 00:12:55.630 START TEST nvmf_fuzz 00:12:55.630 ************************************ 00:12:55.630 06:52:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:55.889 * Looking for test storage... 00:12:55.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.889 06:52:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:55.889 06:52:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:55.889 06:52:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:55.889 06:52:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:55.889 06:52:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:55.889 06:52:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:55.889 06:52:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:55.889 06:52:00 -- scripts/common.sh@335 -- # IFS=.-: 00:12:55.889 06:52:00 -- scripts/common.sh@335 -- # read -ra ver1 00:12:55.889 06:52:00 -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.889 06:52:00 -- scripts/common.sh@336 -- # read -ra ver2 00:12:55.889 06:52:00 -- scripts/common.sh@337 -- # local 'op=<' 00:12:55.889 06:52:00 -- scripts/common.sh@339 -- # ver1_l=2 00:12:55.889 06:52:00 -- scripts/common.sh@340 -- # ver2_l=1 00:12:55.889 06:52:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:55.889 06:52:00 -- scripts/common.sh@343 -- # case "$op" in 00:12:55.889 06:52:00 -- scripts/common.sh@344 -- # : 1 00:12:55.889 06:52:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:55.889 06:52:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.889 06:52:00 -- scripts/common.sh@364 -- # decimal 1 00:12:55.889 06:52:00 -- scripts/common.sh@352 -- # local d=1 00:12:55.889 06:52:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.889 06:52:00 -- scripts/common.sh@354 -- # echo 1 00:12:55.889 06:52:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:55.889 06:52:00 -- scripts/common.sh@365 -- # decimal 2 00:12:55.889 06:52:00 -- scripts/common.sh@352 -- # local d=2 00:12:55.889 06:52:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.889 06:52:00 -- scripts/common.sh@354 -- # echo 2 00:12:55.889 06:52:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:55.889 06:52:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:55.889 06:52:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:55.889 06:52:00 -- scripts/common.sh@367 -- # return 0 00:12:55.889 06:52:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.889 06:52:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:55.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.889 --rc genhtml_branch_coverage=1 00:12:55.889 --rc genhtml_function_coverage=1 00:12:55.889 --rc genhtml_legend=1 00:12:55.889 --rc geninfo_all_blocks=1 00:12:55.889 --rc geninfo_unexecuted_blocks=1 00:12:55.889 00:12:55.889 ' 00:12:55.889 06:52:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:55.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.889 --rc genhtml_branch_coverage=1 00:12:55.889 --rc genhtml_function_coverage=1 00:12:55.889 --rc genhtml_legend=1 00:12:55.889 --rc geninfo_all_blocks=1 00:12:55.889 --rc geninfo_unexecuted_blocks=1 00:12:55.889 00:12:55.889 ' 00:12:55.889 06:52:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:55.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.889 --rc genhtml_branch_coverage=1 00:12:55.889 --rc genhtml_function_coverage=1 00:12:55.889 --rc genhtml_legend=1 00:12:55.889 --rc geninfo_all_blocks=1 00:12:55.889 --rc geninfo_unexecuted_blocks=1 00:12:55.889 00:12:55.889 ' 00:12:55.889 06:52:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:55.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.889 --rc genhtml_branch_coverage=1 00:12:55.889 --rc genhtml_function_coverage=1 00:12:55.889 --rc genhtml_legend=1 00:12:55.889 --rc geninfo_all_blocks=1 00:12:55.889 --rc geninfo_unexecuted_blocks=1 00:12:55.889 00:12:55.889 ' 00:12:55.889 06:52:00 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.889 06:52:00 -- nvmf/common.sh@7 -- # uname -s 00:12:55.890 06:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.890 06:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.890 06:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.890 06:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.890 06:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.890 06:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.890 06:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.890 06:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.890 06:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.890 06:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 
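The 'lt 1.15 2' check traced just above is scripts/common.sh splitting both version strings on '.', '-' and ':' and comparing them component by component. The core idea, reduced to a standalone sketch (the real cmp_versions also handles other operators and padding of short versions):

# Sketch of the component-wise version compare traced above; missing
# components default to 0, and the loop runs over the longer of the two.
IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
    (( d1 > d2 )) && { echo "1.15 > 2"; break; }
    (( d1 < d2 )) && { echo "1.15 < 2"; break; }
done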
00:12:55.890 06:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:12:55.890 06:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.890 06:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.890 06:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.890 06:52:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.890 06:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.890 06:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.890 06:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.890 06:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.890 06:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.890 06:52:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.890 06:52:00 -- paths/export.sh@5 -- # export PATH 00:12:55.890 06:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.890 06:52:00 -- nvmf/common.sh@46 -- # : 0 00:12:55.890 06:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:55.890 06:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:55.890 06:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:55.890 06:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.890 06:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.890 06:52:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:55.890 06:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:55.890 06:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:55.890 06:52:00 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:55.890 06:52:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:55.890 06:52:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.890 06:52:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:55.890 06:52:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:55.890 06:52:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:55.890 06:52:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.890 06:52:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.890 06:52:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.890 06:52:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:55.890 06:52:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:55.890 06:52:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.890 06:52:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.890 06:52:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:55.890 06:52:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:55.890 06:52:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.890 06:52:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.890 06:52:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.890 06:52:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.890 06:52:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.890 06:52:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.890 06:52:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.890 06:52:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.890 06:52:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:55.890 06:52:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:55.890 Cannot find device "nvmf_tgt_br" 00:12:55.890 06:52:00 -- nvmf/common.sh@154 -- # true 00:12:55.890 06:52:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.890 Cannot find device "nvmf_tgt_br2" 00:12:55.890 06:52:00 -- nvmf/common.sh@155 -- # true 00:12:55.890 06:52:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:55.890 06:52:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:55.890 Cannot find device "nvmf_tgt_br" 00:12:55.890 06:52:00 -- nvmf/common.sh@157 -- # true 00:12:55.890 06:52:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:55.890 Cannot find device "nvmf_tgt_br2" 00:12:55.890 06:52:00 -- nvmf/common.sh@158 -- # true 00:12:55.890 06:52:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:56.149 06:52:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:56.149 06:52:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.149 06:52:00 -- nvmf/common.sh@161 -- # true 00:12:56.149 06:52:00 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.149 06:52:00 -- nvmf/common.sh@162 -- # true 00:12:56.149 06:52:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.149 06:52:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.149 06:52:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.149 06:52:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.149 06:52:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.149 06:52:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.149 06:52:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.149 06:52:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.149 06:52:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.149 06:52:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:56.149 06:52:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:56.149 06:52:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:56.149 06:52:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:56.149 06:52:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.149 06:52:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.149 06:52:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.149 06:52:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:56.149 06:52:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:56.149 06:52:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.149 06:52:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.149 06:52:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.149 06:52:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.149 06:52:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.149 06:52:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:56.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:56.149 00:12:56.149 --- 10.0.0.2 ping statistics --- 00:12:56.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.149 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:56.149 06:52:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:56.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:56.149 00:12:56.149 --- 10.0.0.3 ping statistics --- 00:12:56.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.149 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:56.149 06:52:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:56.149 00:12:56.149 --- 10.0.0.1 ping statistics --- 00:12:56.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.149 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:56.149 06:52:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.149 06:52:00 -- nvmf/common.sh@421 -- # return 0 00:12:56.149 06:52:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.149 06:52:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.149 06:52:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.149 06:52:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.149 06:52:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.149 06:52:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.149 06:52:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.149 06:52:00 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78094 00:12:56.149 06:52:00 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:56.149 06:52:00 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.149 06:52:00 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78094 00:12:56.149 06:52:00 -- common/autotest_common.sh@829 -- # '[' -z 78094 ']' 00:12:56.149 06:52:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.149 06:52:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.149 06:52:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
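(Editor's note: the xtrace above shows nvmf_veth_init building the test network before the target starts. A minimal sketch of that topology, reconstructed only from commands visible in the trace — names and addresses match the trace; assumes root, iproute2, and iptables:)

    ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the two host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # sanity check, as in the trace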
00:12:56.150 06:52:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.150 06:52:00 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 06:52:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.526 06:52:01 -- common/autotest_common.sh@862 -- # return 0 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.526 06:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.526 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 06:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:57.526 06:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.526 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 Malloc0 00:12:57.526 06:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:57.526 06:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.526 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 06:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.526 06:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.526 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 06:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.526 06:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.526 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:12:57.526 06:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:57.526 Shutting down the fuzz application 00:12:57.526 06:52:01 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:57.785 Shutting down the fuzz application 00:12:57.785 06:52:02 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.785 06:52:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.785 06:52:02 -- common/autotest_common.sh@10 -- # set +x 00:12:57.785 06:52:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.785 06:52:02 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:57.785 06:52:02 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:57.785 06:52:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.785 06:52:02 -- nvmf/common.sh@116 -- # sync 00:12:58.043 06:52:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.043 06:52:02 -- nvmf/common.sh@119 -- # set +e 00:12:58.043 06:52:02 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.043 06:52:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.043 rmmod nvme_tcp 00:12:58.043 rmmod nvme_fabrics 00:12:58.043 rmmod nvme_keyring 00:12:58.043 06:52:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.043 06:52:02 -- nvmf/common.sh@123 -- # set -e 00:12:58.043 06:52:02 -- nvmf/common.sh@124 -- # return 0 00:12:58.043 06:52:02 -- nvmf/common.sh@477 -- # '[' -n 78094 ']' 00:12:58.043 06:52:02 -- nvmf/common.sh@478 -- # killprocess 78094 00:12:58.043 06:52:02 -- common/autotest_common.sh@936 -- # '[' -z 78094 ']' 00:12:58.043 06:52:02 -- common/autotest_common.sh@940 -- # kill -0 78094 00:12:58.043 06:52:02 -- common/autotest_common.sh@941 -- # uname 00:12:58.043 06:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.043 06:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78094 00:12:58.043 06:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.043 06:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.043 killing process with pid 78094 00:12:58.043 06:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78094' 00:12:58.043 06:52:02 -- common/autotest_common.sh@955 -- # kill 78094 00:12:58.043 06:52:02 -- common/autotest_common.sh@960 -- # wait 78094 00:12:58.043 06:52:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.043 06:52:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.043 06:52:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.043 06:52:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.043 06:52:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.043 06:52:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.043 06:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.043 06:52:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.303 06:52:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:58.303 06:52:02 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:58.303 ************************************ 00:12:58.303 END TEST nvmf_fuzz 00:12:58.303 ************************************ 00:12:58.303 00:12:58.303 real 0m2.501s 00:12:58.303 user 0m2.564s 00:12:58.303 sys 0m0.564s 00:12:58.303 06:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.303 06:52:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.303 06:52:02 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:58.303 06:52:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.303 06:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.303 06:52:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.303 ************************************ 00:12:58.303 START TEST nvmf_multiconnection 00:12:58.303 ************************************ 00:12:58.303 06:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:58.303 * Looking for test storage... 
00:12:58.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.303 06:52:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.303 06:52:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.303 06:52:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:58.303 06:52:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:58.303 06:52:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:58.303 06:52:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:58.303 06:52:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:58.303 06:52:02 -- scripts/common.sh@335 -- # IFS=.-: 00:12:58.303 06:52:02 -- scripts/common.sh@335 -- # read -ra ver1 00:12:58.303 06:52:02 -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.303 06:52:02 -- scripts/common.sh@336 -- # read -ra ver2 00:12:58.303 06:52:02 -- scripts/common.sh@337 -- # local 'op=<' 00:12:58.303 06:52:02 -- scripts/common.sh@339 -- # ver1_l=2 00:12:58.303 06:52:02 -- scripts/common.sh@340 -- # ver2_l=1 00:12:58.303 06:52:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:58.303 06:52:02 -- scripts/common.sh@343 -- # case "$op" in 00:12:58.303 06:52:02 -- scripts/common.sh@344 -- # : 1 00:12:58.303 06:52:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:58.303 06:52:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.303 06:52:02 -- scripts/common.sh@364 -- # decimal 1 00:12:58.303 06:52:02 -- scripts/common.sh@352 -- # local d=1 00:12:58.303 06:52:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.303 06:52:02 -- scripts/common.sh@354 -- # echo 1 00:12:58.303 06:52:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:58.303 06:52:02 -- scripts/common.sh@365 -- # decimal 2 00:12:58.303 06:52:02 -- scripts/common.sh@352 -- # local d=2 00:12:58.303 06:52:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.303 06:52:02 -- scripts/common.sh@354 -- # echo 2 00:12:58.303 06:52:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:58.303 06:52:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:58.303 06:52:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:58.303 06:52:02 -- scripts/common.sh@367 -- # return 0 00:12:58.303 06:52:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.303 06:52:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:58.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.303 --rc genhtml_branch_coverage=1 00:12:58.303 --rc genhtml_function_coverage=1 00:12:58.303 --rc genhtml_legend=1 00:12:58.303 --rc geninfo_all_blocks=1 00:12:58.303 --rc geninfo_unexecuted_blocks=1 00:12:58.303 00:12:58.303 ' 00:12:58.303 06:52:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:58.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.303 --rc genhtml_branch_coverage=1 00:12:58.303 --rc genhtml_function_coverage=1 00:12:58.303 --rc genhtml_legend=1 00:12:58.303 --rc geninfo_all_blocks=1 00:12:58.303 --rc geninfo_unexecuted_blocks=1 00:12:58.303 00:12:58.303 ' 00:12:58.303 06:52:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:58.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.303 --rc genhtml_branch_coverage=1 00:12:58.303 --rc genhtml_function_coverage=1 00:12:58.303 --rc genhtml_legend=1 00:12:58.303 --rc geninfo_all_blocks=1 00:12:58.303 --rc geninfo_unexecuted_blocks=1 00:12:58.303 00:12:58.303 ' 00:12:58.303 
06:52:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:58.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.303 --rc genhtml_branch_coverage=1 00:12:58.303 --rc genhtml_function_coverage=1 00:12:58.303 --rc genhtml_legend=1 00:12:58.303 --rc geninfo_all_blocks=1 00:12:58.303 --rc geninfo_unexecuted_blocks=1 00:12:58.303 00:12:58.303 ' 00:12:58.303 06:52:02 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.303 06:52:02 -- nvmf/common.sh@7 -- # uname -s 00:12:58.303 06:52:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.303 06:52:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.303 06:52:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.303 06:52:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.303 06:52:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.303 06:52:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.303 06:52:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.303 06:52:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.303 06:52:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.303 06:52:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.303 06:52:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:12:58.303 06:52:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:12:58.303 06:52:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.303 06:52:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.303 06:52:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.303 06:52:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.303 06:52:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.303 06:52:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.303 06:52:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.304 06:52:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.304 06:52:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.304 06:52:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.304 06:52:02 -- paths/export.sh@5 -- # export PATH 00:12:58.304 06:52:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.304 06:52:02 -- nvmf/common.sh@46 -- # : 0 00:12:58.304 06:52:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.304 06:52:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.304 06:52:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.304 06:52:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.304 06:52:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.304 06:52:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.304 06:52:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.304 06:52:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.562 06:52:02 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.562 06:52:02 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.562 06:52:02 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:58.562 06:52:02 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:58.562 06:52:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.562 06:52:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.562 06:52:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.562 06:52:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.562 06:52:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.562 06:52:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.562 06:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.562 06:52:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.562 06:52:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:58.562 06:52:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:58.562 06:52:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:58.562 06:52:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:58.562 06:52:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:58.562 06:52:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:58.562 06:52:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.563 06:52:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.563 06:52:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:58.563 06:52:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:58.563 06:52:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.563 06:52:02 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.563 06:52:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.563 06:52:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.563 06:52:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.563 06:52:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.563 06:52:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.563 06:52:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.563 06:52:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:58.563 06:52:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:58.563 Cannot find device "nvmf_tgt_br" 00:12:58.563 06:52:02 -- nvmf/common.sh@154 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.563 Cannot find device "nvmf_tgt_br2" 00:12:58.563 06:52:02 -- nvmf/common.sh@155 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:58.563 06:52:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:58.563 Cannot find device "nvmf_tgt_br" 00:12:58.563 06:52:02 -- nvmf/common.sh@157 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:58.563 Cannot find device "nvmf_tgt_br2" 00:12:58.563 06:52:02 -- nvmf/common.sh@158 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:58.563 06:52:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:58.563 06:52:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.563 06:52:02 -- nvmf/common.sh@161 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.563 06:52:02 -- nvmf/common.sh@162 -- # true 00:12:58.563 06:52:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.563 06:52:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.563 06:52:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.563 06:52:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.563 06:52:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.563 06:52:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.563 06:52:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.563 06:52:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:58.563 06:52:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:58.563 06:52:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:58.563 06:52:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:58.563 06:52:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:58.563 06:52:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:58.563 06:52:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.563 06:52:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:58.822 06:52:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.822 06:52:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:58.822 06:52:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:58.822 06:52:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.822 06:52:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.822 06:52:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.822 06:52:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.822 06:52:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.822 06:52:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:58.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:58.822 00:12:58.822 --- 10.0.0.2 ping statistics --- 00:12:58.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.822 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:58.822 06:52:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:58.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:58.822 00:12:58.822 --- 10.0.0.3 ping statistics --- 00:12:58.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.822 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:58.822 06:52:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:12:58.822 00:12:58.822 --- 10.0.0.1 ping statistics --- 00:12:58.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.822 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:12:58.822 06:52:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.822 06:52:03 -- nvmf/common.sh@421 -- # return 0 00:12:58.822 06:52:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:58.822 06:52:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.822 06:52:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:58.822 06:52:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:58.822 06:52:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.822 06:52:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:58.822 06:52:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:58.822 06:52:03 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:58.822 06:52:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:58.822 06:52:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.822 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:12:58.822 06:52:03 -- nvmf/common.sh@469 -- # nvmfpid=78284 00:12:58.822 06:52:03 -- nvmf/common.sh@470 -- # waitforlisten 78284 00:12:58.822 06:52:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.822 06:52:03 -- common/autotest_common.sh@829 -- # '[' -z 78284 ']' 00:12:58.822 06:52:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.822 06:52:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.822 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:58.822 06:52:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.822 06:52:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.822 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:12:58.822 [2024-12-13 06:52:03.239224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:58.822 [2024-12-13 06:52:03.239311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.082 [2024-12-13 06:52:03.382236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.082 [2024-12-13 06:52:03.414424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.082 [2024-12-13 06:52:03.414585] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.082 [2024-12-13 06:52:03.414598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.082 [2024-12-13 06:52:03.414606] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.082 [2024-12-13 06:52:03.415203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.082 [2024-12-13 06:52:03.415394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.082 [2024-12-13 06:52:03.416261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.082 [2024-12-13 06:52:03.416326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.019 06:52:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.019 06:52:04 -- common/autotest_common.sh@862 -- # return 0 00:13:00.019 06:52:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.019 06:52:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 06:52:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.019 06:52:04 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 [2024-12-13 06:52:04.294682] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 06:52:04 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:00.019 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.019 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 Malloc1 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 
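(Editor's note: from here the trace stands up eleven identical subsystems, cnode1 through cnode11, each backed by a 64 MiB malloc bdev with 512-byte blocks. The repeated rpc_cmd sequence is equivalent to the sketch below — assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock; in the harness the rpc_cmd wrapper issues these same calls:)

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # done once, above
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done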
06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 [2024-12-13 06:52:04.363406] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.019 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:00.019 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 Malloc2 00:13:00.019 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.020 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 Malloc3 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.020 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 Malloc4 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.020 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 Malloc5 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.020 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.020 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.020 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:00.020 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.020 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc6 00:13:00.280 06:52:04 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.280 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc7 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.280 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc8 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 
-- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.280 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc9 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.280 06:52:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc10 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.280 06:52:04 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 Malloc11 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:00.280 06:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.280 06:52:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.280 06:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.280 06:52:04 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:00.280 06:52:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.281 06:52:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.539 06:52:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:00.539 06:52:04 -- common/autotest_common.sh@1187 -- # local i=0 00:13:00.539 06:52:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.539 06:52:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:00.539 06:52:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:02.480 06:52:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:02.480 06:52:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:13:02.480 06:52:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:02.480 06:52:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:02.480 06:52:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.480 06:52:06 -- common/autotest_common.sh@1197 -- # return 0 00:13:02.480 06:52:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:02.480 06:52:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:02.739 06:52:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:02.739 06:52:07 -- common/autotest_common.sh@1187 -- # local i=0 00:13:02.739 06:52:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.739 06:52:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:02.739 06:52:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:04.643 06:52:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:04.643 06:52:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:13:04.643 06:52:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:13:04.643 06:52:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:04.643 06:52:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.643 06:52:09 -- common/autotest_common.sh@1197 -- # return 0 00:13:04.643 06:52:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:04.643 06:52:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:04.902 06:52:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:04.902 06:52:09 -- common/autotest_common.sh@1187 -- # local i=0 00:13:04.902 06:52:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.902 06:52:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:04.902 06:52:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:06.807 06:52:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:06.807 06:52:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:06.807 06:52:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:13:06.807 06:52:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:06.807 06:52:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.807 06:52:11 -- common/autotest_common.sh@1197 -- # return 0 00:13:06.807 06:52:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.807 06:52:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:07.065 06:52:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:07.065 06:52:11 -- common/autotest_common.sh@1187 -- # local i=0 00:13:07.065 06:52:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.065 06:52:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:07.065 06:52:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:08.970 06:52:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:08.970 06:52:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:08.970 06:52:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:13:08.970 06:52:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:08.970 06:52:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.970 06:52:13 -- common/autotest_common.sh@1197 -- # return 0 00:13:08.970 06:52:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.970 06:52:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:09.228 06:52:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:09.228 06:52:13 -- common/autotest_common.sh@1187 -- # local i=0 00:13:09.228 06:52:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.228 06:52:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:09.228 06:52:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:11.132 06:52:15 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:11.132 06:52:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:11.132 06:52:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:13:11.132 06:52:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:11.132 06:52:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.132 06:52:15 -- common/autotest_common.sh@1197 -- # return 0 00:13:11.132 06:52:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:11.132 06:52:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:11.391 06:52:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:11.391 06:52:15 -- common/autotest_common.sh@1187 -- # local i=0 00:13:11.391 06:52:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.391 06:52:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:11.391 06:52:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:13.295 06:52:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:13.295 06:52:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:13:13.295 06:52:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:13.295 06:52:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:13.295 06:52:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.295 06:52:17 -- common/autotest_common.sh@1197 -- # return 0 00:13:13.295 06:52:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:13.295 06:52:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:13.554 06:52:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:13.554 06:52:17 -- common/autotest_common.sh@1187 -- # local i=0 00:13:13.554 06:52:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.554 06:52:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:13.554 06:52:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:15.458 06:52:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:15.458 06:52:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:15.458 06:52:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:13:15.459 06:52:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:15.459 06:52:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.459 06:52:19 -- common/autotest_common.sh@1197 -- # return 0 00:13:15.459 06:52:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:15.459 06:52:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:15.718 06:52:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:15.718 06:52:20 -- common/autotest_common.sh@1187 -- # local i=0 00:13:15.718 06:52:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.718 06:52:20 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:15.718 06:52:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:17.622 06:52:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:17.622 06:52:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:17.622 06:52:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:13:17.622 06:52:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:17.622 06:52:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.622 06:52:22 -- common/autotest_common.sh@1197 -- # return 0 00:13:17.622 06:52:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:17.622 06:52:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:17.882 06:52:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:17.882 06:52:22 -- common/autotest_common.sh@1187 -- # local i=0 00:13:17.882 06:52:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.882 06:52:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:17.882 06:52:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:19.789 06:52:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:19.789 06:52:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:19.789 06:52:24 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:13:19.789 06:52:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:19.789 06:52:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.789 06:52:24 -- common/autotest_common.sh@1197 -- # return 0 00:13:19.789 06:52:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:19.789 06:52:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:20.048 06:52:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:20.048 06:52:24 -- common/autotest_common.sh@1187 -- # local i=0 00:13:20.048 06:52:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.048 06:52:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:20.048 06:52:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:21.953 06:52:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:21.953 06:52:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:21.953 06:52:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:13:21.953 06:52:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:21.953 06:52:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.953 06:52:26 -- common/autotest_common.sh@1197 -- # return 0 00:13:21.953 06:52:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.953 06:52:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:22.211 06:52:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:22.211 06:52:26 -- common/autotest_common.sh@1187 -- # local i=0 
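
The trace above repeats one connect-then-poll cycle per subsystem: nvme connect attaches cnodeN over TCP, then waitforserial polls lsblk until a namespace with serial SPDKN shows up. A minimal standalone sketch of that cycle, assuming HOSTNQN, HOSTID, NVMF_SUBSYS and the 10.0.0.2:4420 target come from the test environment (wait_for_serial is an illustrative stand-in for the real waitforserial helper, not its actual implementation):

wait_for_serial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # Connected once lsblk reports exactly one namespace with this serial.
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
    done
    return 1
}
for n in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$n" -a 10.0.0.2 -s 4420
    wait_for_serial "SPDK$n"
done
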
00:13:22.211 06:52:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.211 06:52:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:22.211 06:52:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:24.116 06:52:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:24.116 06:52:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:24.116 06:52:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:13:24.117 06:52:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:24.117 06:52:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.117 06:52:28 -- common/autotest_common.sh@1197 -- # return 0 00:13:24.117 06:52:28 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:24.117 [global] 00:13:24.117 thread=1 00:13:24.117 invalidate=1 00:13:24.117 rw=read 00:13:24.117 time_based=1 00:13:24.117 runtime=10 00:13:24.117 ioengine=libaio 00:13:24.117 direct=1 00:13:24.117 bs=262144 00:13:24.117 iodepth=64 00:13:24.117 norandommap=1 00:13:24.117 numjobs=1 00:13:24.117 00:13:24.117 [job0] 00:13:24.117 filename=/dev/nvme0n1 00:13:24.117 [job1] 00:13:24.117 filename=/dev/nvme10n1 00:13:24.117 [job2] 00:13:24.117 filename=/dev/nvme1n1 00:13:24.117 [job3] 00:13:24.117 filename=/dev/nvme2n1 00:13:24.117 [job4] 00:13:24.117 filename=/dev/nvme3n1 00:13:24.375 [job5] 00:13:24.375 filename=/dev/nvme4n1 00:13:24.375 [job6] 00:13:24.375 filename=/dev/nvme5n1 00:13:24.375 [job7] 00:13:24.375 filename=/dev/nvme6n1 00:13:24.375 [job8] 00:13:24.375 filename=/dev/nvme7n1 00:13:24.375 [job9] 00:13:24.375 filename=/dev/nvme8n1 00:13:24.375 [job10] 00:13:24.375 filename=/dev/nvme9n1 00:13:24.375 Could not set queue depth (nvme0n1) 00:13:24.375 Could not set queue depth (nvme10n1) 00:13:24.375 Could not set queue depth (nvme1n1) 00:13:24.375 Could not set queue depth (nvme2n1) 00:13:24.375 Could not set queue depth (nvme3n1) 00:13:24.375 Could not set queue depth (nvme4n1) 00:13:24.375 Could not set queue depth (nvme5n1) 00:13:24.376 Could not set queue depth (nvme6n1) 00:13:24.376 Could not set queue depth (nvme7n1) 00:13:24.376 Could not set queue depth (nvme8n1) 00:13:24.376 Could not set queue depth (nvme9n1) 00:13:24.634 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:13:24.634 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:24.634 fio-3.35 00:13:24.634 Starting 11 threads 00:13:36.846 00:13:36.846 job0: (groupid=0, jobs=1): err= 0: pid=78748: Fri Dec 13 06:52:39 2024 00:13:36.846 read: IOPS=799, BW=200MiB/s (209MB/s)(2012MiB/10071msec) 00:13:36.846 slat (usec): min=20, max=19421, avg=1240.24, stdev=2629.01 00:13:36.846 clat (msec): min=11, max=155, avg=78.76, stdev=14.47 00:13:36.846 lat (msec): min=11, max=155, avg=80.00, stdev=14.67 00:13:36.846 clat percentiles (msec): 00:13:36.846 | 1.00th=[ 46], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 64], 00:13:36.846 | 30.00th=[ 69], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 87], 00:13:36.846 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 93], 95.00th=[ 96], 00:13:36.846 | 99.00th=[ 102], 99.50th=[ 107], 99.90th=[ 144], 99.95th=[ 146], 00:13:36.846 | 99.99th=[ 157] 00:13:36.846 bw ( KiB/s): min=177664, max=265728, per=9.80%, avg=204313.15, stdev=33864.04, samples=20 00:13:36.846 iops : min= 694, max= 1038, avg=798.00, stdev=132.12, samples=20 00:13:36.846 lat (msec) : 20=0.07%, 50=1.63%, 100=96.84%, 250=1.45% 00:13:36.846 cpu : usr=0.36%, sys=2.74%, ctx=1932, majf=0, minf=4097 00:13:36.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.846 issued rwts: total=8047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.846 job1: (groupid=0, jobs=1): err= 0: pid=78749: Fri Dec 13 06:52:39 2024 00:13:36.846 read: IOPS=402, BW=101MiB/s (106MB/s)(1018MiB/10109msec) 00:13:36.846 slat (usec): min=20, max=150666, avg=2452.50, stdev=7462.50 00:13:36.846 clat (msec): min=104, max=315, avg=156.36, stdev=19.79 00:13:36.846 lat (msec): min=119, max=339, avg=158.81, stdev=20.86 00:13:36.846 clat percentiles (msec): 00:13:36.846 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:13:36.846 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 153], 00:13:36.846 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 201], 00:13:36.846 | 99.00th=[ 213], 99.50th=[ 236], 99.90th=[ 255], 99.95th=[ 271], 00:13:36.846 | 99.99th=[ 317] 00:13:36.846 bw ( KiB/s): min=64512, max=116736, per=4.92%, avg=102579.20, stdev=13426.83, samples=20 00:13:36.846 iops : min= 252, max= 456, avg=400.70, stdev=52.45, samples=20 00:13:36.846 lat (msec) : 250=99.88%, 500=0.12% 00:13:36.846 cpu : usr=0.28%, sys=1.66%, ctx=982, majf=0, minf=4097 00:13:36.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.846 issued rwts: total=4070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.846 job2: (groupid=0, jobs=1): err= 0: pid=78750: Fri Dec 13 06:52:39 2024 00:13:36.846 read: IOPS=1912, BW=478MiB/s (501MB/s)(4785MiB/10007msec) 00:13:36.846 slat (usec): min=15, max=11157, avg=518.52, stdev=1035.68 00:13:36.846 clat (usec): min=6501, max=49202, avg=32914.69, stdev=2005.10 00:13:36.846 lat (usec): min=8599, max=49233, avg=33433.21, stdev=2014.39 00:13:36.846 clat percentiles (usec): 00:13:36.846 | 1.00th=[28443], 5.00th=[30016], 
10.00th=[30802], 20.00th=[31589], 00:13:36.846 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:13:36.846 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35390], 95.00th=[35914], 00:13:36.846 | 99.00th=[38011], 99.50th=[39584], 99.90th=[43254], 99.95th=[45351], 00:13:36.846 | 99.99th=[49021] 00:13:36.846 bw ( KiB/s): min=451975, max=496640, per=23.45%, avg=488682.21, stdev=10066.81, samples=19 00:13:36.846 iops : min= 1765, max= 1940, avg=1908.74, stdev=39.40, samples=19 00:13:36.846 lat (msec) : 10=0.03%, 20=0.08%, 50=99.89% 00:13:36.846 cpu : usr=0.88%, sys=5.49%, ctx=4177, majf=0, minf=4097 00:13:36.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.846 issued rwts: total=19141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.846 job3: (groupid=0, jobs=1): err= 0: pid=78751: Fri Dec 13 06:52:39 2024 00:13:36.846 read: IOPS=601, BW=150MiB/s (158MB/s)(1516MiB/10072msec) 00:13:36.846 slat (usec): min=16, max=141738, avg=1618.05, stdev=5745.05 00:13:36.846 clat (msec): min=13, max=269, avg=104.58, stdev=37.10 00:13:36.846 lat (msec): min=17, max=330, avg=106.20, stdev=37.95 00:13:36.846 clat percentiles (msec): 00:13:36.846 | 1.00th=[ 53], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:13:36.846 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 92], 00:13:36.846 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 176], 95.00th=[ 186], 00:13:36.846 | 99.00th=[ 211], 99.50th=[ 215], 99.90th=[ 230], 99.95th=[ 253], 00:13:36.846 | 99.99th=[ 271] 00:13:36.846 bw ( KiB/s): min=72704, max=188416, per=7.37%, avg=153607.25, stdev=44774.41, samples=20 00:13:36.846 iops : min= 284, max= 736, avg=600.00, stdev=174.95, samples=20 00:13:36.846 lat (msec) : 20=0.16%, 50=0.74%, 100=79.70%, 250=19.31%, 500=0.08% 00:13:36.846 cpu : usr=0.32%, sys=1.97%, ctx=1535, majf=0, minf=4097 00:13:36.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.846 issued rwts: total=6063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.846 job4: (groupid=0, jobs=1): err= 0: pid=78752: Fri Dec 13 06:52:39 2024 00:13:36.846 read: IOPS=404, BW=101MiB/s (106MB/s)(1022MiB/10109msec) 00:13:36.847 slat (usec): min=19, max=144204, avg=2441.70, stdev=8281.45 00:13:36.847 clat (msec): min=62, max=303, avg=155.63, stdev=22.20 00:13:36.847 lat (msec): min=63, max=341, avg=158.08, stdev=23.53 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 131], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:13:36.847 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:13:36.847 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 203], 00:13:36.847 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 257], 99.95th=[ 262], 00:13:36.847 | 99.99th=[ 305] 00:13:36.847 bw ( KiB/s): min=63488, max=119808, per=4.94%, avg=103014.40, stdev=13178.25, samples=20 00:13:36.847 iops : min= 248, max= 468, avg=402.40, stdev=51.48, samples=20 00:13:36.847 lat (msec) : 100=0.86%, 250=98.83%, 500=0.32% 00:13:36.847 cpu : usr=0.34%, sys=1.50%, ctx=984, majf=0, minf=4097 00:13:36.847 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=4087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job5: (groupid=0, jobs=1): err= 0: pid=78753: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=407, BW=102MiB/s (107MB/s)(1029MiB/10110msec) 00:13:36.847 slat (usec): min=20, max=158610, avg=2426.42, stdev=7957.27 00:13:36.847 clat (msec): min=78, max=329, avg=154.60, stdev=21.43 00:13:36.847 lat (msec): min=78, max=347, avg=157.03, stdev=22.74 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 86], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:13:36.847 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:13:36.847 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 201], 00:13:36.847 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 251], 99.95th=[ 266], 00:13:36.847 | 99.99th=[ 330] 00:13:36.847 bw ( KiB/s): min=79872, max=117248, per=4.98%, avg=103764.90, stdev=11373.94, samples=20 00:13:36.847 iops : min= 312, max= 458, avg=405.30, stdev=44.50, samples=20 00:13:36.847 lat (msec) : 100=1.55%, 250=98.32%, 500=0.12% 00:13:36.847 cpu : usr=0.24%, sys=1.93%, ctx=984, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=4117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job6: (groupid=0, jobs=1): err= 0: pid=78754: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=792, BW=198MiB/s (208MB/s)(1996MiB/10067msec) 00:13:36.847 slat (usec): min=19, max=41490, avg=1252.30, stdev=2723.00 00:13:36.847 clat (msec): min=27, max=152, avg=79.39, stdev=14.21 00:13:36.847 lat (msec): min=40, max=152, avg=80.65, stdev=14.40 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 64], 00:13:36.847 | 30.00th=[ 68], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 88], 00:13:36.847 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 94], 95.00th=[ 96], 00:13:36.847 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 142], 99.95th=[ 144], 00:13:36.847 | 99.99th=[ 153] 00:13:36.847 bw ( KiB/s): min=175967, max=260087, per=9.73%, avg=202710.40, stdev=33407.65, samples=20 00:13:36.847 iops : min= 687, max= 1015, avg=791.70, stdev=130.47, samples=20 00:13:36.847 lat (msec) : 50=0.89%, 100=97.85%, 250=1.27% 00:13:36.847 cpu : usr=0.41%, sys=3.26%, ctx=1828, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=7982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job7: (groupid=0, jobs=1): err= 0: pid=78755: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=981, BW=245MiB/s (257MB/s)(2457MiB/10016msec) 00:13:36.847 slat (usec): min=15, max=127737, avg=994.63, stdev=2882.70 00:13:36.847 clat (msec): min=6, max=222, avg=64.19, 
stdev=21.65 00:13:36.847 lat (msec): min=8, max=320, avg=65.19, stdev=22.03 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:13:36.847 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:36.847 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 72], 00:13:36.847 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 222], 99.95th=[ 222], 00:13:36.847 | 99.99th=[ 222] 00:13:36.847 bw ( KiB/s): min=79872, max=269312, per=12.00%, avg=250062.05, stdev=41857.33, samples=20 00:13:36.847 iops : min= 312, max= 1052, avg=976.70, stdev=163.49, samples=20 00:13:36.847 lat (msec) : 10=0.13%, 20=0.46%, 50=2.75%, 100=94.19%, 250=2.47% 00:13:36.847 cpu : usr=0.51%, sys=3.18%, ctx=2190, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=9827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job8: (groupid=0, jobs=1): err= 0: pid=78756: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=403, BW=101MiB/s (106MB/s)(1021MiB/10113msec) 00:13:36.847 slat (usec): min=16, max=139067, avg=2453.13, stdev=8385.99 00:13:36.847 clat (msec): min=73, max=305, avg=155.80, stdev=19.29 00:13:36.847 lat (msec): min=114, max=331, avg=158.25, stdev=20.83 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:13:36.847 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:13:36.847 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 194], 00:13:36.847 | 99.00th=[ 213], 99.50th=[ 232], 99.90th=[ 264], 99.95th=[ 268], 00:13:36.847 | 99.99th=[ 305] 00:13:36.847 bw ( KiB/s): min=64000, max=120320, per=4.94%, avg=102937.60, stdev=13701.67, samples=20 00:13:36.847 iops : min= 250, max= 470, avg=402.10, stdev=53.52, samples=20 00:13:36.847 lat (msec) : 100=0.02%, 250=99.61%, 500=0.37% 00:13:36.847 cpu : usr=0.13%, sys=1.33%, ctx=1053, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=4085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job9: (groupid=0, jobs=1): err= 0: pid=78757: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=1080, BW=270MiB/s (283MB/s)(2708MiB/10024msec) 00:13:36.847 slat (usec): min=19, max=15471, avg=919.04, stdev=2020.71 00:13:36.847 clat (usec): min=15622, max=80903, avg=58259.44, stdev=10333.41 00:13:36.847 lat (usec): min=23986, max=86903, avg=59178.48, stdev=10458.41 00:13:36.847 clat percentiles (usec): 00:13:36.847 | 1.00th=[30540], 5.00th=[32900], 10.00th=[35914], 20.00th=[54789], 00:13:36.847 | 30.00th=[57934], 40.00th=[59507], 50.00th=[60556], 60.00th=[62129], 00:13:36.847 | 70.00th=[63701], 80.00th=[65274], 90.00th=[67634], 95.00th=[69731], 00:13:36.847 | 99.00th=[73925], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:13:36.847 | 99.99th=[81265] 00:13:36.847 bw ( KiB/s): min=246272, max=459776, per=13.22%, avg=275635.85, stdev=47946.76, samples=20 00:13:36.847 iops : min= 962, max= 1796, 
avg=1076.70, stdev=187.29, samples=20 00:13:36.847 lat (msec) : 20=0.01%, 50=13.27%, 100=86.72% 00:13:36.847 cpu : usr=0.53%, sys=4.19%, ctx=2314, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=10831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 job10: (groupid=0, jobs=1): err= 0: pid=78758: Fri Dec 13 06:52:39 2024 00:13:36.847 read: IOPS=404, BW=101MiB/s (106MB/s)(1022MiB/10107msec) 00:13:36.847 slat (usec): min=20, max=136685, avg=2441.31, stdev=7545.00 00:13:36.847 clat (msec): min=106, max=273, avg=155.57, stdev=19.60 00:13:36.847 lat (msec): min=106, max=328, avg=158.01, stdev=20.79 00:13:36.847 clat percentiles (msec): 00:13:36.847 | 1.00th=[ 133], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:13:36.847 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150], 00:13:36.847 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 197], 00:13:36.847 | 99.00th=[ 218], 99.50th=[ 236], 99.90th=[ 253], 99.95th=[ 253], 00:13:36.847 | 99.99th=[ 275] 00:13:36.847 bw ( KiB/s): min=65667, max=114176, per=4.95%, avg=103072.15, stdev=12739.36, samples=20 00:13:36.847 iops : min= 256, max= 446, avg=402.60, stdev=49.84, samples=20 00:13:36.847 lat (msec) : 250=99.83%, 500=0.17% 00:13:36.847 cpu : usr=0.27%, sys=1.30%, ctx=1039, majf=0, minf=4097 00:13:36.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:36.847 issued rwts: total=4089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.847 00:13:36.847 Run status group 0 (all jobs): 00:13:36.847 READ: bw=2035MiB/s (2134MB/s), 101MiB/s-478MiB/s (106MB/s-501MB/s), io=20.1GiB (21.6GB), run=10007-10113msec 00:13:36.847 00:13:36.847 Disk stats (read/write): 00:13:36.847 nvme0n1: ios=15974/0, merge=0/0, ticks=1232478/0, in_queue=1232478, util=97.80% 00:13:36.847 nvme10n1: ios=8034/0, merge=0/0, ticks=1227535/0, in_queue=1227535, util=98.00% 00:13:36.847 nvme1n1: ios=38227/0, merge=0/0, ticks=1243607/0, in_queue=1243607, util=98.09% 00:13:36.847 nvme2n1: ios=12025/0, merge=0/0, ticks=1234230/0, in_queue=1234230, util=98.24% 00:13:36.847 nvme3n1: ios=8071/0, merge=0/0, ticks=1229538/0, in_queue=1229538, util=98.30% 00:13:36.847 nvme4n1: ios=8114/0, merge=0/0, ticks=1228098/0, in_queue=1228098, util=98.57% 00:13:36.847 nvme5n1: ios=15841/0, merge=0/0, ticks=1232410/0, in_queue=1232410, util=98.57% 00:13:36.847 nvme6n1: ios=19537/0, merge=0/0, ticks=1238853/0, in_queue=1238853, util=98.62% 00:13:36.847 nvme7n1: ios=8045/0, merge=0/0, ticks=1229077/0, in_queue=1229077, util=99.04% 00:13:36.847 nvme8n1: ios=21556/0, merge=0/0, ticks=1238736/0, in_queue=1238736, util=99.09% 00:13:36.847 nvme9n1: ios=8073/0, merge=0/0, ticks=1228223/0, in_queue=1228223, util=99.22% 00:13:36.847 06:52:39 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:36.847 [global] 00:13:36.847 thread=1 00:13:36.847 invalidate=1 00:13:36.847 rw=randwrite 00:13:36.847 time_based=1 00:13:36.848 
runtime=10 00:13:36.848 ioengine=libaio 00:13:36.848 direct=1 00:13:36.848 bs=262144 00:13:36.848 iodepth=64 00:13:36.848 norandommap=1 00:13:36.848 numjobs=1 00:13:36.848 00:13:36.848 [job0] 00:13:36.848 filename=/dev/nvme0n1 00:13:36.848 [job1] 00:13:36.848 filename=/dev/nvme10n1 00:13:36.848 [job2] 00:13:36.848 filename=/dev/nvme1n1 00:13:36.848 [job3] 00:13:36.848 filename=/dev/nvme2n1 00:13:36.848 [job4] 00:13:36.848 filename=/dev/nvme3n1 00:13:36.848 [job5] 00:13:36.848 filename=/dev/nvme4n1 00:13:36.848 [job6] 00:13:36.848 filename=/dev/nvme5n1 00:13:36.848 [job7] 00:13:36.848 filename=/dev/nvme6n1 00:13:36.848 [job8] 00:13:36.848 filename=/dev/nvme7n1 00:13:36.848 [job9] 00:13:36.848 filename=/dev/nvme8n1 00:13:36.848 [job10] 00:13:36.848 filename=/dev/nvme9n1 00:13:36.848 Could not set queue depth (nvme0n1) 00:13:36.848 Could not set queue depth (nvme10n1) 00:13:36.848 Could not set queue depth (nvme1n1) 00:13:36.848 Could not set queue depth (nvme2n1) 00:13:36.848 Could not set queue depth (nvme3n1) 00:13:36.848 Could not set queue depth (nvme4n1) 00:13:36.848 Could not set queue depth (nvme5n1) 00:13:36.848 Could not set queue depth (nvme6n1) 00:13:36.848 Could not set queue depth (nvme7n1) 00:13:36.848 Could not set queue depth (nvme8n1) 00:13:36.848 Could not set queue depth (nvme9n1) 00:13:36.848 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:36.848 fio-3.35 00:13:36.848 Starting 11 threads 00:13:46.824 00:13:46.824 job0: (groupid=0, jobs=1): err= 0: pid=78958: Fri Dec 13 06:52:50 2024 00:13:46.824 write: IOPS=409, BW=102MiB/s (107MB/s)(1038MiB/10135msec); 0 zone resets 00:13:46.824 slat (usec): min=16, max=12443, avg=2385.93, stdev=4127.25 00:13:46.824 clat (msec): min=33, max=287, avg=153.84, stdev=14.22 00:13:46.824 lat (msec): min=33, max=287, avg=156.23, stdev=13.85 00:13:46.824 clat percentiles (msec): 00:13:46.824 | 1.00th=[ 97], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 148], 00:13:46.824 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:13:46.824 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:13:46.824 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:13:46.824 | 99.99th=[ 288] 
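
The job file printed above is what drives this run: one shared [global] section plus a [jobN] stanza per connected namespace. As a rough illustration of how such a file could be assembled (the real fio-wrapper may do this differently; the output filename and the device glob are assumptions):

cat > multiconn.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
i=0
for dev in /dev/nvme*n1; do
    printf '[job%d]\nfilename=%s\n' "$i" "$dev" >> multiconn.fio
    i=$((i+1))
done
fio multiconn.fio
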
00:13:46.824 bw ( KiB/s): min=102400, max=106709, per=9.19%, avg=104616.95, stdev=1552.34, samples=20 00:13:46.824 iops : min= 400, max= 416, avg=408.60, stdev= 6.01, samples=20 00:13:46.824 lat (msec) : 50=0.31%, 100=0.75%, 250=98.60%, 500=0.34% 00:13:46.824 cpu : usr=0.86%, sys=1.10%, ctx=2699, majf=0, minf=1 00:13:46.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:46.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.824 issued rwts: total=0,4150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.824 job1: (groupid=0, jobs=1): err= 0: pid=78959: Fri Dec 13 06:52:50 2024 00:13:46.824 write: IOPS=287, BW=71.9MiB/s (75.3MB/s)(732MiB/10187msec); 0 zone resets 00:13:46.824 slat (usec): min=15, max=63896, avg=3410.53, stdev=6190.14 00:13:46.824 clat (msec): min=51, max=402, avg=219.17, stdev=23.68 00:13:46.824 lat (msec): min=51, max=402, avg=222.58, stdev=23.20 00:13:46.824 clat percentiles (msec): 00:13:46.824 | 1.00th=[ 113], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 211], 00:13:46.824 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:13:46.824 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 239], 00:13:46.824 | 99.00th=[ 305], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 401], 00:13:46.824 | 99.99th=[ 401] 00:13:46.824 bw ( KiB/s): min=69632, max=75776, per=6.44%, avg=73336.15, stdev=1792.68, samples=20 00:13:46.824 iops : min= 272, max= 296, avg=286.40, stdev= 7.00, samples=20 00:13:46.824 lat (msec) : 100=0.82%, 250=97.61%, 500=1.57% 00:13:46.824 cpu : usr=0.50%, sys=0.85%, ctx=2484, majf=0, minf=1 00:13:46.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:13:46.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.824 issued rwts: total=0,2928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.824 job2: (groupid=0, jobs=1): err= 0: pid=78971: Fri Dec 13 06:52:50 2024 00:13:46.824 write: IOPS=409, BW=102MiB/s (107MB/s)(1038MiB/10144msec); 0 zone resets 00:13:46.824 slat (usec): min=19, max=13917, avg=2404.50, stdev=4132.18 00:13:46.824 clat (msec): min=8, max=289, avg=153.89, stdev=14.94 00:13:46.824 lat (msec): min=8, max=289, avg=156.30, stdev=14.58 00:13:46.824 clat percentiles (msec): 00:13:46.824 | 1.00th=[ 94], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 148], 00:13:46.824 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:13:46.824 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:13:46.824 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:13:46.824 | 99.99th=[ 292] 00:13:46.824 bw ( KiB/s): min=101888, max=107520, per=9.19%, avg=104657.40, stdev=1487.37, samples=20 00:13:46.824 iops : min= 398, max= 420, avg=408.80, stdev= 5.81, samples=20 00:13:46.824 lat (msec) : 10=0.05%, 50=0.48%, 100=0.48%, 250=98.55%, 500=0.43% 00:13:46.824 cpu : usr=0.77%, sys=1.15%, ctx=3928, majf=0, minf=1 00:13:46.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:46.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.824 issued rwts: total=0,4152,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job3: (groupid=0, jobs=1): err= 0: pid=78972: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=291, BW=73.0MiB/s (76.5MB/s)(743MiB/10188msec); 0 zone resets 00:13:46.825 slat (usec): min=17, max=87984, avg=3360.59, stdev=6101.57 00:13:46.825 clat (msec): min=89, max=396, avg=215.87, stdev=18.93 00:13:46.825 lat (msec): min=89, max=396, avg=219.23, stdev=18.20 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 171], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 209], 00:13:46.825 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:13:46.825 | 70.00th=[ 222], 80.00th=[ 224], 90.00th=[ 224], 95.00th=[ 226], 00:13:46.825 | 99.00th=[ 300], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 397], 00:13:46.825 | 99.99th=[ 397] 00:13:46.825 bw ( KiB/s): min=65667, max=77824, per=6.54%, avg=74487.75, stdev=2568.09, samples=20 00:13:46.825 iops : min= 256, max= 304, avg=290.90, stdev=10.14, samples=20 00:13:46.825 lat (msec) : 100=0.17%, 250=98.32%, 500=1.51% 00:13:46.825 cpu : usr=0.56%, sys=0.89%, ctx=2731, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,2973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job4: (groupid=0, jobs=1): err= 0: pid=78973: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=285, BW=71.5MiB/s (75.0MB/s)(728MiB/10183msec); 0 zone resets 00:13:46.825 slat (usec): min=16, max=130359, avg=3429.11, stdev=6515.44 00:13:46.825 clat (msec): min=132, max=400, avg=220.27, stdev=18.38 00:13:46.825 lat (msec): min=132, max=400, avg=223.70, stdev=17.42 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 186], 5.00th=[ 203], 10.00th=[ 205], 20.00th=[ 211], 00:13:46.825 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:13:46.825 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 234], 95.00th=[ 236], 00:13:46.825 | 99.00th=[ 305], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 401], 00:13:46.825 | 99.99th=[ 401] 00:13:46.825 bw ( KiB/s): min=57344, max=75776, per=6.40%, avg=72927.20, stdev=3990.91, samples=20 00:13:46.825 iops : min= 224, max= 296, avg=284.85, stdev=15.60, samples=20 00:13:46.825 lat (msec) : 250=97.77%, 500=2.23% 00:13:46.825 cpu : usr=0.43%, sys=0.73%, ctx=3504, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,2912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job5: (groupid=0, jobs=1): err= 0: pid=78974: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=412, BW=103MiB/s (108MB/s)(1046MiB/10137msec); 0 zone resets 00:13:46.825 slat (usec): min=18, max=29330, avg=2364.60, stdev=4118.93 00:13:46.825 clat (msec): min=2, max=286, avg=152.67, stdev=19.80 00:13:46.825 lat (msec): min=3, max=286, avg=155.03, stdev=19.68 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 27], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 148], 00:13:46.825 | 30.00th=[ 153], 40.00th=[ 155], 
50.00th=[ 157], 60.00th=[ 157], 00:13:46.825 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:13:46.825 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 279], 00:13:46.825 | 99.99th=[ 288] 00:13:46.825 bw ( KiB/s): min=102400, max=123392, per=9.26%, avg=105451.10, stdev=4472.11, samples=20 00:13:46.825 iops : min= 400, max= 482, avg=411.90, stdev=17.47, samples=20 00:13:46.825 lat (msec) : 4=0.05%, 10=0.17%, 20=0.43%, 50=0.69%, 100=0.72% 00:13:46.825 lat (msec) : 250=97.61%, 500=0.33% 00:13:46.825 cpu : usr=0.74%, sys=1.29%, ctx=7463, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,4183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job6: (groupid=0, jobs=1): err= 0: pid=78975: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=1089, BW=272MiB/s (286MB/s)(2738MiB/10054msec); 0 zone resets 00:13:46.825 slat (usec): min=17, max=9384, avg=908.25, stdev=1519.25 00:13:46.825 clat (msec): min=11, max=109, avg=57.83, stdev= 3.86 00:13:46.825 lat (msec): min=11, max=109, avg=58.74, stdev= 3.72 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 56], 00:13:46.825 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 59], 00:13:46.825 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 61], 95.00th=[ 62], 00:13:46.825 | 99.00th=[ 66], 99.50th=[ 73], 99.90th=[ 100], 99.95th=[ 106], 00:13:46.825 | 99.99th=[ 110] 00:13:46.825 bw ( KiB/s): min=270848, max=285184, per=24.48%, avg=278759.00, stdev=3975.38, samples=20 00:13:46.825 iops : min= 1058, max= 1114, avg=1088.75, stdev=15.59, samples=20 00:13:46.825 lat (msec) : 20=0.16%, 50=0.29%, 100=99.46%, 250=0.09% 00:13:46.825 cpu : usr=1.57%, sys=2.71%, ctx=13978, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,10952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job7: (groupid=0, jobs=1): err= 0: pid=78976: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=300, BW=75.1MiB/s (78.7MB/s)(765MiB/10189msec); 0 zone resets 00:13:46.825 slat (usec): min=18, max=57290, avg=3249.36, stdev=6029.28 00:13:46.825 clat (msec): min=13, max=404, avg=209.81, stdev=42.34 00:13:46.825 lat (msec): min=13, max=404, avg=213.06, stdev=42.59 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 34], 5.00th=[ 104], 10.00th=[ 201], 20.00th=[ 207], 00:13:46.825 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:13:46.825 | 70.00th=[ 222], 80.00th=[ 224], 90.00th=[ 230], 95.00th=[ 236], 00:13:46.825 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 405], 00:13:46.825 | 99.99th=[ 405] 00:13:46.825 bw ( KiB/s): min=70003, max=125952, per=6.73%, avg=76682.95, stdev=11709.30, samples=20 00:13:46.825 iops : min= 273, max= 492, avg=299.50, stdev=45.76, samples=20 00:13:46.825 lat (msec) : 20=0.39%, 50=1.54%, 100=2.97%, 250=93.59%, 500=1.50% 00:13:46.825 cpu : usr=0.43%, sys=1.03%, ctx=2779, majf=0, minf=1 00:13:46.825 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,3059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job8: (groupid=0, jobs=1): err= 0: pid=78977: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=288, BW=72.2MiB/s (75.7MB/s)(736MiB/10192msec); 0 zone resets 00:13:46.825 slat (usec): min=18, max=82487, avg=3395.71, stdev=6225.41 00:13:46.825 clat (msec): min=24, max=399, avg=218.06, stdev=27.48 00:13:46.825 lat (msec): min=24, max=400, avg=221.45, stdev=27.19 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 69], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 209], 00:13:46.825 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:13:46.825 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 236], 95.00th=[ 241], 00:13:46.825 | 99.00th=[ 300], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 401], 00:13:46.825 | 99.99th=[ 401] 00:13:46.825 bw ( KiB/s): min=68608, max=77824, per=6.47%, avg=73731.75, stdev=1934.33, samples=20 00:13:46.825 iops : min= 268, max= 304, avg=287.95, stdev= 7.61, samples=20 00:13:46.825 lat (msec) : 50=0.68%, 100=0.82%, 250=96.77%, 500=1.73% 00:13:46.825 cpu : usr=0.33%, sys=0.83%, ctx=3000, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,2944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job9: (groupid=0, jobs=1): err= 0: pid=78978: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=406, BW=102MiB/s (107MB/s)(1030MiB/10133msec); 0 zone resets 00:13:46.825 slat (usec): min=16, max=68429, avg=2422.49, stdev=4260.32 00:13:46.825 clat (msec): min=70, max=280, avg=154.93, stdev=10.94 00:13:46.825 lat (msec): min=70, max=280, avg=157.35, stdev=10.23 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 150], 00:13:46.825 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:13:46.825 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:13:46.825 | 99.00th=[ 194], 99.50th=[ 232], 99.90th=[ 271], 99.95th=[ 271], 00:13:46.825 | 99.99th=[ 279] 00:13:46.825 bw ( KiB/s): min=90112, max=106496, per=9.12%, avg=103827.85, stdev=3444.70, samples=20 00:13:46.825 iops : min= 352, max= 416, avg=405.55, stdev=13.45, samples=20 00:13:46.825 lat (msec) : 100=0.39%, 250=99.27%, 500=0.34% 00:13:46.825 cpu : usr=0.82%, sys=1.14%, ctx=4260, majf=0, minf=1 00:13:46.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:46.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.825 issued rwts: total=0,4120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.825 job10: (groupid=0, jobs=1): err= 0: pid=78979: Fri Dec 13 06:52:50 2024 00:13:46.825 write: IOPS=290, BW=72.7MiB/s (76.2MB/s)(741MiB/10192msec); 0 zone resets 00:13:46.825 slat (usec): min=18, max=56425, avg=3370.93, stdev=6071.03 
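
Each job block above follows the same fio-3.35 text layout: slat/clat percentiles, then a bw line carrying min/max/avg/stdev. A hedged awk sketch for pulling each job's average bandwidth out of a captured fio log (field layout assumed to match the fio-3.35 output shown here; fio.log is a placeholder name):

awk '/^job[0-9]+:/ { job = $1; sub(/:$/, "", job) }
     /bw \(/ { for (i = 1; i <= NF; i++)
                   if ($i ~ /^avg=/) { v = $i; gsub(/avg=|,/, "", v); print job, v " KiB/s" } }' fio.log
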
00:13:46.825 clat (msec): min=24, max=400, avg=216.53, stdev=27.43 00:13:46.825 lat (msec): min=24, max=400, avg=219.90, stdev=27.15 00:13:46.825 clat percentiles (msec): 00:13:46.825 | 1.00th=[ 69], 5.00th=[ 199], 10.00th=[ 205], 20.00th=[ 209], 00:13:46.825 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:13:46.825 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 230], 95.00th=[ 239], 00:13:46.825 | 99.00th=[ 305], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 401], 00:13:46.825 | 99.99th=[ 401] 00:13:46.825 bw ( KiB/s): min=69771, max=77824, per=6.52%, avg=74291.60, stdev=1979.46, samples=20 00:13:46.825 iops : min= 272, max= 304, avg=289.85, stdev= 7.79, samples=20 00:13:46.825 lat (msec) : 50=0.67%, 100=0.81%, 250=96.96%, 500=1.55% 00:13:46.825 cpu : usr=0.55%, sys=0.91%, ctx=3156, majf=0, minf=1 00:13:46.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:13:46.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:46.826 issued rwts: total=0,2964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.826 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.826 00:13:46.826 Run status group 0 (all jobs): 00:13:46.826 WRITE: bw=1112MiB/s (1166MB/s), 71.5MiB/s-272MiB/s (75.0MB/s-286MB/s), io=11.1GiB (11.9GB), run=10054-10192msec 00:13:46.826 00:13:46.826 Disk stats (read/write): 00:13:46.826 nvme0n1: ios=49/8168, merge=0/0, ticks=34/1212852, in_queue=1212886, util=97.90% 00:13:46.826 nvme10n1: ios=49/5731, merge=0/0, ticks=36/1209470, in_queue=1209506, util=98.09% 00:13:46.826 nvme1n1: ios=35/8176, merge=0/0, ticks=34/1214666, in_queue=1214700, util=98.27% 00:13:46.826 nvme2n1: ios=21/5816, merge=0/0, ticks=27/1210383, in_queue=1210410, util=98.22% 00:13:46.826 nvme3n1: ios=5/5698, merge=0/0, ticks=10/1209445, in_queue=1209455, util=98.18% 00:13:46.826 nvme4n1: ios=13/8235, merge=0/0, ticks=29/1212451, in_queue=1212480, util=98.38% 00:13:46.826 nvme5n1: ios=0/21799, merge=0/0, ticks=0/1219404, in_queue=1219404, util=98.64% 00:13:46.826 nvme6n1: ios=0/5994, merge=0/0, ticks=0/1210157, in_queue=1210157, util=98.58% 00:13:46.826 nvme7n1: ios=0/5760, merge=0/0, ticks=0/1209805, in_queue=1209805, util=98.82% 00:13:46.826 nvme8n1: ios=0/8100, merge=0/0, ticks=0/1212014, in_queue=1212014, util=98.74% 00:13:46.826 nvme9n1: ios=0/5804, merge=0/0, ticks=0/1209776, in_queue=1209776, util=98.96% 00:13:46.826 06:52:50 -- target/multiconnection.sh@36 -- # sync 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
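
From here the test tears down in the reverse order of setup: each controller is disconnected, waitforserial_disconnect polls until the SPDKN serial disappears from lsblk, and the subsystem is removed over RPC. A condensed sketch of that loop, assuming the same environment variables as above (the rpc.py call mirrors the rpc_cmd trace; the inline polling loop is illustrative rather than the helper's actual body):

for n in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$n"
    # Wait until lsblk no longer lists a namespace with this serial.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$n"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$n"
done
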
00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 
-- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:46.826 06:52:50 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:50 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:46.826 06:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:50 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:50 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:46.826 06:52:50 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:46.826 06:52:50 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.826 06:52:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:46.826 06:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:46.826 06:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.826 06:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.826 06:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:46.826 06:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.826 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 06:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.826 06:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.826 06:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:46.826 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:46.827 06:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:46.827 06:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.827 06:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.827 06:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:46.827 06:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.827 06:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:46.827 06:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.827 06:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:46.827 06:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.827 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.827 06:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.827 06:52:51 -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:13:46.827 06:52:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:46.827 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:46.827 06:52:51 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:46.827 06:52:51 -- common/autotest_common.sh@1208 -- # local i=0 00:13:46.827 06:52:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:46.827 06:52:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:46.827 06:52:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:46.827 06:52:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:46.827 06:52:51 -- common/autotest_common.sh@1220 -- # return 0 00:13:46.827 06:52:51 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:46.827 06:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.827 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.827 06:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.827 06:52:51 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:46.827 06:52:51 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:46.827 06:52:51 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:46.827 06:52:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:46.827 06:52:51 -- nvmf/common.sh@116 -- # sync 00:13:46.827 06:52:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:46.827 06:52:51 -- nvmf/common.sh@119 -- # set +e 00:13:46.827 06:52:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:46.827 06:52:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:46.827 rmmod nvme_tcp 00:13:46.827 rmmod nvme_fabrics 00:13:46.827 rmmod nvme_keyring 00:13:46.827 06:52:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:46.827 06:52:51 -- nvmf/common.sh@123 -- # set -e 00:13:46.827 06:52:51 -- nvmf/common.sh@124 -- # return 0 00:13:46.827 06:52:51 -- nvmf/common.sh@477 -- # '[' -n 78284 ']' 00:13:46.827 06:52:51 -- nvmf/common.sh@478 -- # killprocess 78284 00:13:46.827 06:52:51 -- common/autotest_common.sh@936 -- # '[' -z 78284 ']' 00:13:46.827 06:52:51 -- common/autotest_common.sh@940 -- # kill -0 78284 00:13:46.827 06:52:51 -- common/autotest_common.sh@941 -- # uname 00:13:46.827 06:52:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.827 06:52:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78284 00:13:46.827 06:52:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.827 06:52:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.827 killing process with pid 78284 00:13:46.827 06:52:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78284' 00:13:46.827 06:52:51 -- common/autotest_common.sh@955 -- # kill 78284 00:13:46.827 06:52:51 -- common/autotest_common.sh@960 -- # wait 78284 00:13:47.088 06:52:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.088 06:52:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.088 06:52:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.088 06:52:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.088 06:52:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.088 06:52:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.088 06:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.354 06:52:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.354 06:52:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:47.354 00:13:47.354 real 0m49.010s 00:13:47.354 user 2m40.172s 00:13:47.354 sys 0m35.190s 00:13:47.354 06:52:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:47.354 ************************************ 00:13:47.354 END TEST nvmf_multiconnection 00:13:47.354 ************************************ 00:13:47.354 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.354 06:52:51 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:47.354 06:52:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.354 06:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.354 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.354 ************************************ 00:13:47.354 START TEST nvmf_initiator_timeout 00:13:47.354 ************************************ 00:13:47.354 06:52:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:47.354 * Looking for test storage... 00:13:47.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:47.354 06:52:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:47.354 06:52:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:47.354 06:52:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:47.354 06:52:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:47.354 06:52:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:47.354 06:52:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:47.354 06:52:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:47.354 06:52:51 -- scripts/common.sh@335 -- # IFS=.-: 00:13:47.354 06:52:51 -- scripts/common.sh@335 -- # read -ra ver1 00:13:47.354 06:52:51 -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.354 06:52:51 -- scripts/common.sh@336 -- # read -ra ver2 00:13:47.354 06:52:51 -- scripts/common.sh@337 -- # local 'op=<' 00:13:47.354 06:52:51 -- scripts/common.sh@339 -- # ver1_l=2 00:13:47.354 06:52:51 -- scripts/common.sh@340 -- # ver2_l=1 00:13:47.354 06:52:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:47.354 06:52:51 -- scripts/common.sh@343 -- # case "$op" in 00:13:47.354 06:52:51 -- scripts/common.sh@344 -- # : 1 00:13:47.354 06:52:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:47.354 06:52:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.354 06:52:51 -- scripts/common.sh@364 -- # decimal 1 00:13:47.354 06:52:51 -- scripts/common.sh@352 -- # local d=1 00:13:47.354 06:52:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.354 06:52:51 -- scripts/common.sh@354 -- # echo 1 00:13:47.354 06:52:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:47.354 06:52:51 -- scripts/common.sh@365 -- # decimal 2 00:13:47.613 06:52:51 -- scripts/common.sh@352 -- # local d=2 00:13:47.613 06:52:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.613 06:52:51 -- scripts/common.sh@354 -- # echo 2 00:13:47.613 06:52:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:47.613 06:52:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:47.613 06:52:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:47.613 06:52:51 -- scripts/common.sh@367 -- # return 0 00:13:47.613 06:52:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.613 06:52:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:47.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.613 --rc genhtml_branch_coverage=1 00:13:47.613 --rc genhtml_function_coverage=1 00:13:47.613 --rc genhtml_legend=1 00:13:47.613 --rc geninfo_all_blocks=1 00:13:47.613 --rc geninfo_unexecuted_blocks=1 00:13:47.613 00:13:47.613 ' 00:13:47.613 06:52:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:47.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.613 --rc genhtml_branch_coverage=1 00:13:47.613 --rc genhtml_function_coverage=1 00:13:47.613 --rc genhtml_legend=1 00:13:47.613 --rc geninfo_all_blocks=1 00:13:47.613 --rc geninfo_unexecuted_blocks=1 00:13:47.613 00:13:47.613 ' 00:13:47.613 06:52:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:47.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.613 --rc genhtml_branch_coverage=1 00:13:47.613 --rc genhtml_function_coverage=1 00:13:47.613 --rc genhtml_legend=1 00:13:47.613 --rc geninfo_all_blocks=1 00:13:47.613 --rc geninfo_unexecuted_blocks=1 00:13:47.613 00:13:47.613 ' 00:13:47.613 06:52:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:47.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.613 --rc genhtml_branch_coverage=1 00:13:47.613 --rc genhtml_function_coverage=1 00:13:47.613 --rc genhtml_legend=1 00:13:47.613 --rc geninfo_all_blocks=1 00:13:47.613 --rc geninfo_unexecuted_blocks=1 00:13:47.613 00:13:47.613 ' 00:13:47.613 06:52:51 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:47.613 06:52:51 -- nvmf/common.sh@7 -- # uname -s 00:13:47.613 06:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.613 06:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.613 06:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.613 06:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.613 06:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.613 06:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.613 06:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.613 06:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.613 06:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.613 06:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.613 06:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 
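The scripts/common.sh trace above is the suite's dotted-version check: "lt 1.15 2" asks whether the installed lcov (1.15) predates 2.x by splitting both version strings on ".", "-" and ":" and comparing them field by field. Below is a minimal bash sketch of that comparison under the assumption that missing fields compare as 0; the function name ver_lt is illustrative, and the real script additionally normalizes each field through a "decimal" helper (visible in the trace) before comparing.

# compare dotted versions field by field; returns 0 (true) when $1 < $2
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints, matching the trace's result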
00:13:47.613 06:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:13:47.613 06:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.613 06:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.613 06:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:47.613 06:52:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.613 06:52:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.613 06:52:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.613 06:52:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.613 06:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.614 06:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.614 06:52:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.614 06:52:51 -- paths/export.sh@5 -- # export PATH 00:13:47.614 06:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.614 06:52:51 -- nvmf/common.sh@46 -- # : 0 00:13:47.614 06:52:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.614 06:52:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.614 06:52:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.614 06:52:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.614 06:52:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.614 06:52:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:47.614 06:52:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.614 06:52:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.614 06:52:51 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.614 06:52:51 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.614 06:52:51 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:47.614 06:52:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.614 06:52:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.614 06:52:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.614 06:52:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.614 06:52:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.614 06:52:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.614 06:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.614 06:52:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.614 06:52:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:47.614 06:52:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:47.614 06:52:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:47.614 06:52:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:47.614 06:52:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:47.614 06:52:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:47.614 06:52:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.614 06:52:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.614 06:52:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:47.614 06:52:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:47.614 06:52:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:47.614 06:52:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:47.614 06:52:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:47.614 06:52:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.614 06:52:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:47.614 06:52:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:47.614 06:52:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:47.614 06:52:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:47.614 06:52:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:47.614 06:52:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:47.614 Cannot find device "nvmf_tgt_br" 00:13:47.614 06:52:51 -- nvmf/common.sh@154 -- # true 00:13:47.614 06:52:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.614 Cannot find device "nvmf_tgt_br2" 00:13:47.614 06:52:51 -- nvmf/common.sh@155 -- # true 00:13:47.614 06:52:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:47.614 06:52:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:47.614 Cannot find device "nvmf_tgt_br" 00:13:47.614 06:52:51 -- nvmf/common.sh@157 -- # true 00:13:47.614 06:52:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:47.614 Cannot find device "nvmf_tgt_br2" 00:13:47.614 06:52:51 -- nvmf/common.sh@158 -- # true 00:13:47.614 06:52:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:47.614 06:52:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:47.614 06:52:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:47.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.614 06:52:52 -- nvmf/common.sh@161 -- # true 00:13:47.614 06:52:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.614 06:52:52 -- nvmf/common.sh@162 -- # true 00:13:47.614 06:52:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.614 06:52:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.614 06:52:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.614 06:52:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.614 06:52:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.614 06:52:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.614 06:52:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.614 06:52:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.614 06:52:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.614 06:52:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:47.614 06:52:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:47.614 06:52:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:47.614 06:52:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:47.614 06:52:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.873 06:52:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.874 06:52:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.874 06:52:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:47.874 06:52:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:47.874 06:52:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.874 06:52:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.874 06:52:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.874 06:52:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.874 06:52:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.874 06:52:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:47.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:47.874 00:13:47.874 --- 10.0.0.2 ping statistics --- 00:13:47.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.874 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:47.874 06:52:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:47.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:47.874 00:13:47.874 --- 10.0.0.3 ping statistics --- 00:13:47.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.874 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:47.874 06:52:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:47.874 00:13:47.874 --- 10.0.0.1 ping statistics --- 00:13:47.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.874 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:47.874 06:52:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.874 06:52:52 -- nvmf/common.sh@421 -- # return 0 00:13:47.874 06:52:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.874 06:52:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.874 06:52:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.874 06:52:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.874 06:52:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.874 06:52:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.874 06:52:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.874 06:52:52 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:47.874 06:52:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.874 06:52:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.874 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:13:47.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.874 06:52:52 -- nvmf/common.sh@469 -- # nvmfpid=79357 00:13:47.874 06:52:52 -- nvmf/common.sh@470 -- # waitforlisten 79357 00:13:47.874 06:52:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.874 06:52:52 -- common/autotest_common.sh@829 -- # '[' -z 79357 ']' 00:13:47.874 06:52:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.874 06:52:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.874 06:52:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.874 06:52:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.874 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:13:47.874 [2024-12-13 06:52:52.283491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:47.874 [2024-12-13 06:52:52.283576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.133 [2024-12-13 06:52:52.425506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.133 [2024-12-13 06:52:52.457852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.133 [2024-12-13 06:52:52.458245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.133 [2024-12-13 06:52:52.458298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.133 [2024-12-13 06:52:52.458463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
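Condensed from the nvmf_veth_init trace above: the fixture builds veth pairs into a fresh network namespace, bridges the host-side peers, opens TCP/4420 through iptables, and ping-checks both directions before launching nvmf_tgt inside the namespace. A sketch of the same topology follows, with names and addresses exactly as in the log; only the first target interface is shown (the log wires nvmf_tgt_if2/10.0.0.3 identically), and error handling is omitted.

# initiator side (default netns):        nvmf_init_if = 10.0.0.1
# target side (netns nvmf_tgt_ns_spdk):  nvmf_tgt_if  = 10.0.0.2
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br       # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target, as pinged in the log
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> host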
00:13:48.133 [2024-12-13 06:52:52.458825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.133 [2024-12-13 06:52:52.458970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.133 [2024-12-13 06:52:52.459044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.133 [2024-12-13 06:52:52.459048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.068 06:52:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.068 06:52:53 -- common/autotest_common.sh@862 -- # return 0 00:13:49.068 06:52:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:49.068 06:52:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.068 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 06:52:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:49.068 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.068 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 Malloc0 00:13:49.068 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:49.068 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.068 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 Delay0 00:13:49.068 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.068 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.068 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 [2024-12-13 06:52:53.333644] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.068 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.068 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.068 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.068 06:52:53 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.069 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.069 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.069 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.069 06:52:53 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.069 06:52:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.069 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:13:49.069 [2024-12-13 06:52:53.361847] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.069 06:52:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.069 06:52:53 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.069 06:52:53 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.069 06:52:53 -- common/autotest_common.sh@1187 -- # local i=0 00:13:49.069 06:52:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.069 06:52:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:49.069 06:52:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:51.600 06:52:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:51.600 06:52:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:51.600 06:52:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.600 06:52:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:51.600 06:52:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.600 06:52:55 -- common/autotest_common.sh@1197 -- # return 0 00:13:51.600 06:52:55 -- target/initiator_timeout.sh@35 -- # fio_pid=79421 00:13:51.600 06:52:55 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:51.600 06:52:55 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:51.600 [global] 00:13:51.600 thread=1 00:13:51.600 invalidate=1 00:13:51.600 rw=write 00:13:51.600 time_based=1 00:13:51.600 runtime=60 00:13:51.600 ioengine=libaio 00:13:51.600 direct=1 00:13:51.600 bs=4096 00:13:51.600 iodepth=1 00:13:51.600 norandommap=0 00:13:51.600 numjobs=1 00:13:51.600 00:13:51.600 verify_dump=1 00:13:51.600 verify_backlog=512 00:13:51.600 verify_state_save=0 00:13:51.600 do_verify=1 00:13:51.600 verify=crc32c-intel 00:13:51.600 [job0] 00:13:51.600 filename=/dev/nvme0n1 00:13:51.600 Could not set queue depth (nvme0n1) 00:13:51.600 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.600 fio-3.35 00:13:51.600 Starting 1 thread 00:13:54.135 06:52:58 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:54.135 06:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.135 06:52:58 -- common/autotest_common.sh@10 -- # set +x 00:13:54.135 true 00:13:54.135 06:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.135 06:52:58 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:54.135 06:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.135 06:52:58 -- common/autotest_common.sh@10 -- # set +x 00:13:54.135 true 00:13:54.135 06:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.135 06:52:58 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:54.136 06:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.136 06:52:58 -- common/autotest_common.sh@10 -- # set +x 00:13:54.136 true 00:13:54.136 06:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.136 06:52:58 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:54.136 06:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.136 06:52:58 -- common/autotest_common.sh@10 -- # set +x 00:13:54.136 true 00:13:54.136 06:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.136 06:52:58 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:57.422 06:53:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.422 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:13:57.422 true 00:13:57.422 06:53:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:57.422 06:53:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.422 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:13:57.422 true 00:13:57.422 06:53:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:57.422 06:53:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.422 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:13:57.422 true 00:13:57.422 06:53:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:57.422 06:53:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.422 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:13:57.422 true 00:13:57.422 06:53:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:57.422 06:53:01 -- target/initiator_timeout.sh@54 -- # wait 79421 00:14:53.660 00:14:53.660 job0: (groupid=0, jobs=1): err= 0: pid=79449: Fri Dec 13 06:53:55 2024 00:14:53.660 read: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec) 00:14:53.660 slat (usec): min=11, max=10405, avg=15.20, stdev=60.20 00:14:53.660 clat (usec): min=155, max=40421k, avg=1078.48, stdev=187264.00 00:14:53.660 lat (usec): min=168, max=40421k, avg=1093.69, stdev=187264.00 00:14:53.660 clat percentiles (usec): 00:14:53.660 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:14:53.660 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:14:53.660 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 255], 00:14:53.660 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 326], 00:14:53.660 | 99.99th=[ 469] 00:14:53.660 write: IOPS=782, BW=3128KiB/s (3203kB/s)(183MiB/60000msec); 0 zone resets 00:14:53.660 slat (usec): min=14, max=575, avg=22.53, stdev= 7.34 00:14:53.660 clat (usec): min=116, max=642, avg=166.75, stdev=22.27 00:14:53.660 lat (usec): min=135, max=747, avg=189.28, stdev=23.56 00:14:53.660 clat percentiles (usec): 00:14:53.660 | 1.00th=[ 127], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:14:53.660 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:14:53.660 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 208], 00:14:53.660 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 258], 99.95th=[ 269], 00:14:53.660 | 99.99th=[ 310] 00:14:53.660 bw ( KiB/s): min= 4096, max=11424, per=100.00%, avg=9401.64, stdev=1447.50, samples=39 00:14:53.660 iops : min= 1024, max= 2856, avg=2350.41, stdev=361.87, samples=39 00:14:53.660 lat (usec) : 250=96.45%, 500=3.55%, 750=0.01% 00:14:53.660 lat (msec) : >=2000=0.01% 00:14:53.660 cpu : usr=0.52%, sys=2.33%, ctx=93526, majf=0, minf=5 00:14:53.660 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:53.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.660 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.660 issued rwts: total=46592,46924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:53.660 00:14:53.660 Run status group 0 (all jobs): 00:14:53.660 READ: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:14:53.660 WRITE: bw=3128KiB/s (3203kB/s), 3128KiB/s-3128KiB/s (3203kB/s-3203kB/s), io=183MiB (192MB), run=60000-60000msec 00:14:53.660 00:14:53.660 Disk stats (read/write): 00:14:53.660 nvme0n1: ios=46671/46592, merge=0/0, ticks=10502/8659, in_queue=19161, util=99.87% 00:14:53.660 06:53:55 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.660 06:53:55 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.660 06:53:55 -- common/autotest_common.sh@1208 -- # local i=0 00:14:53.660 06:53:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:53.660 06:53:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.660 06:53:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.660 06:53:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:53.660 06:53:55 -- common/autotest_common.sh@1220 -- # return 0 00:14:53.660 06:53:55 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:53.660 nvmf hotplug test: fio successful as expected 00:14:53.660 06:53:55 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:53.660 06:53:55 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.660 06:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.660 06:53:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 06:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.661 06:53:55 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:53.661 06:53:55 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:53.661 06:53:55 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:53.661 06:53:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.661 06:53:55 -- nvmf/common.sh@116 -- # sync 00:14:53.661 06:53:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.661 06:53:55 -- nvmf/common.sh@119 -- # set +e 00:14:53.661 06:53:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.661 06:53:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.661 rmmod nvme_tcp 00:14:53.661 rmmod nvme_fabrics 00:14:53.661 rmmod nvme_keyring 00:14:53.661 06:53:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.661 06:53:55 -- nvmf/common.sh@123 -- # set -e 00:14:53.661 06:53:55 -- nvmf/common.sh@124 -- # return 0 00:14:53.661 06:53:55 -- nvmf/common.sh@477 -- # '[' -n 79357 ']' 00:14:53.661 06:53:55 -- nvmf/common.sh@478 -- # killprocess 79357 00:14:53.661 06:53:55 -- common/autotest_common.sh@936 -- # '[' -z 79357 ']' 00:14:53.661 06:53:55 -- common/autotest_common.sh@940 -- # kill -0 79357 00:14:53.661 06:53:55 -- common/autotest_common.sh@941 -- # uname 00:14:53.661 06:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.661 06:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79357 00:14:53.661 killing process with pid 79357 00:14:53.661 06:53:56 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:53.661 06:53:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.661 06:53:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79357' 00:14:53.661 06:53:56 -- common/autotest_common.sh@955 -- # kill 79357 00:14:53.661 06:53:56 -- common/autotest_common.sh@960 -- # wait 79357 00:14:53.661 06:53:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.661 06:53:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.661 06:53:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.661 06:53:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.661 06:53:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.661 06:53:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.661 06:53:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.661 06:53:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.661 06:53:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:53.661 ************************************ 00:14:53.661 END TEST nvmf_initiator_timeout 00:14:53.661 ************************************ 00:14:53.661 00:14:53.661 real 1m4.515s 00:14:53.661 user 3m52.758s 00:14:53.661 sys 0m22.414s 00:14:53.661 06:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.661 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 06:53:56 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:53.661 06:53:56 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:53.661 06:53:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.661 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 06:53:56 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:53.661 06:53:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.661 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 06:53:56 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:53.661 06:53:56 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:53.661 06:53:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.661 06:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.661 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.661 ************************************ 00:14:53.661 START TEST nvmf_identify 00:14:53.661 ************************************ 00:14:53.661 06:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:53.661 * Looking for test storage... 
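The teardown traced just above exercises the suite's killprocess helper: it rejects an empty pid, confirms the process is alive with "kill -0", uses ps to check the command name so it never signals a sudo wrapper (here it is reactor_0, the SPDK reactor thread), then sends SIGTERM and reaps the process. A simplified sketch of that flow, assuming the target is a child of the calling shell so that "wait" can reap it; the real helper carries extra branches for non-Linux hosts and sudo-launched targets.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # still running?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
        [ "$process_name" = sudo ] && return 1        # never SIGTERM the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap; ignore the TERM exit status
}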
00:14:53.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:53.661 06:53:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.661 06:53:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.661 06:53:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.661 06:53:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.661 06:53:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.661 06:53:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.661 06:53:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.661 06:53:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.661 06:53:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.661 06:53:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.661 06:53:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.661 06:53:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.661 06:53:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.661 06:53:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.661 06:53:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.661 06:53:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.661 06:53:56 -- scripts/common.sh@344 -- # : 1 00:14:53.661 06:53:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.661 06:53:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.661 06:53:56 -- scripts/common.sh@364 -- # decimal 1 00:14:53.661 06:53:56 -- scripts/common.sh@352 -- # local d=1 00:14:53.661 06:53:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.661 06:53:56 -- scripts/common.sh@354 -- # echo 1 00:14:53.661 06:53:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.661 06:53:56 -- scripts/common.sh@365 -- # decimal 2 00:14:53.661 06:53:56 -- scripts/common.sh@352 -- # local d=2 00:14:53.661 06:53:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.661 06:53:56 -- scripts/common.sh@354 -- # echo 2 00:14:53.661 06:53:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.661 06:53:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.661 06:53:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.661 06:53:56 -- scripts/common.sh@367 -- # return 0 00:14:53.661 06:53:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.661 06:53:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.661 --rc genhtml_branch_coverage=1 00:14:53.661 --rc genhtml_function_coverage=1 00:14:53.661 --rc genhtml_legend=1 00:14:53.661 --rc geninfo_all_blocks=1 00:14:53.661 --rc geninfo_unexecuted_blocks=1 00:14:53.661 00:14:53.661 ' 00:14:53.661 06:53:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.661 --rc genhtml_branch_coverage=1 00:14:53.661 --rc genhtml_function_coverage=1 00:14:53.661 --rc genhtml_legend=1 00:14:53.661 --rc geninfo_all_blocks=1 00:14:53.661 --rc geninfo_unexecuted_blocks=1 00:14:53.661 00:14:53.661 ' 00:14:53.661 06:53:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.661 --rc genhtml_branch_coverage=1 00:14:53.661 --rc genhtml_function_coverage=1 00:14:53.661 --rc genhtml_legend=1 00:14:53.661 --rc geninfo_all_blocks=1 00:14:53.661 --rc geninfo_unexecuted_blocks=1 00:14:53.661 00:14:53.661 ' 00:14:53.661 
06:53:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.661 --rc genhtml_branch_coverage=1 00:14:53.661 --rc genhtml_function_coverage=1 00:14:53.661 --rc genhtml_legend=1 00:14:53.661 --rc geninfo_all_blocks=1 00:14:53.661 --rc geninfo_unexecuted_blocks=1 00:14:53.661 00:14:53.661 ' 00:14:53.661 06:53:56 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.661 06:53:56 -- nvmf/common.sh@7 -- # uname -s 00:14:53.661 06:53:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.661 06:53:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.661 06:53:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.661 06:53:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.661 06:53:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.661 06:53:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.661 06:53:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.661 06:53:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.661 06:53:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.661 06:53:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.661 06:53:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:14:53.661 06:53:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:14:53.661 06:53:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.661 06:53:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.661 06:53:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.661 06:53:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.661 06:53:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.661 06:53:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.661 06:53:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.661 06:53:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.662 06:53:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.662 06:53:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.662 06:53:56 -- paths/export.sh@5 -- # export PATH 00:14:53.662 06:53:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.662 06:53:56 -- nvmf/common.sh@46 -- # : 0 00:14:53.662 06:53:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.662 06:53:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.662 06:53:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.662 06:53:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.662 06:53:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.662 06:53:56 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.662 06:53:56 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.662 06:53:56 -- host/identify.sh@14 -- # nvmftestinit 00:14:53.662 06:53:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.662 06:53:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.662 06:53:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.662 06:53:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.662 06:53:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.662 06:53:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.662 06:53:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.662 06:53:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:53.662 06:53:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.662 06:53:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.662 06:53:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.662 06:53:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:53.662 06:53:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.662 06:53:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.662 06:53:56 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.662 06:53:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.662 06:53:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.662 06:53:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.662 06:53:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.662 06:53:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.662 06:53:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:53.662 06:53:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:53.662 Cannot find device "nvmf_tgt_br" 00:14:53.662 06:53:56 -- nvmf/common.sh@154 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.662 Cannot find device "nvmf_tgt_br2" 00:14:53.662 06:53:56 -- nvmf/common.sh@155 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:53.662 06:53:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:53.662 Cannot find device "nvmf_tgt_br" 00:14:53.662 06:53:56 -- nvmf/common.sh@157 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:53.662 Cannot find device "nvmf_tgt_br2" 00:14:53.662 06:53:56 -- nvmf/common.sh@158 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:53.662 06:53:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:53.662 06:53:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.662 06:53:56 -- nvmf/common.sh@161 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.662 06:53:56 -- nvmf/common.sh@162 -- # true 00:14:53.662 06:53:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.662 06:53:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.662 06:53:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.662 06:53:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.662 06:53:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.662 06:53:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.662 06:53:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.662 06:53:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.662 06:53:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.662 06:53:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:53.662 06:53:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:53.662 06:53:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:53.662 06:53:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:53.662 06:53:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.662 06:53:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.662 06:53:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:53.662 06:53:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:53.662 06:53:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:53.662 06:53:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.662 06:53:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.662 06:53:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.662 06:53:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.662 06:53:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.662 06:53:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:53.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:14:53.662 00:14:53.662 --- 10.0.0.2 ping statistics --- 00:14:53.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.662 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:53.662 06:53:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:53.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:53.662 00:14:53.662 --- 10.0.0.3 ping statistics --- 00:14:53.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.662 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:53.662 06:53:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:53.662 00:14:53.662 --- 10.0.0.1 ping statistics --- 00:14:53.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.662 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:53.662 06:53:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.662 06:53:56 -- nvmf/common.sh@421 -- # return 0 00:14:53.662 06:53:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.662 06:53:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.662 06:53:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.662 06:53:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.662 06:53:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.662 06:53:56 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:53.662 06:53:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.662 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 06:53:56 -- host/identify.sh@19 -- # nvmfpid=80293 00:14:53.662 06:53:56 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.662 06:53:56 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.662 06:53:56 -- host/identify.sh@23 -- # waitforlisten 80293 00:14:53.662 06:53:56 -- common/autotest_common.sh@829 -- # '[' -z 80293 ']' 00:14:53.662 06:53:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.662 06:53:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.662 06:53:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.662 06:53:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.662 06:53:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.662 [2024-12-13 06:53:56.894619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.662 [2024-12-13 06:53:56.894714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.662 [2024-12-13 06:53:57.032322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.662 [2024-12-13 06:53:57.067508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.662 [2024-12-13 06:53:57.067673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.663 [2024-12-13 06:53:57.067687] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.663 [2024-12-13 06:53:57.067695] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.663 [2024-12-13 06:53:57.067900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.663 [2024-12-13 06:53:57.068064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.663 [2024-12-13 06:53:57.068167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.663 [2024-12-13 06:53:57.068169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.663 06:53:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.663 06:53:57 -- common/autotest_common.sh@862 -- # return 0 00:14:53.663 06:53:57 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 [2024-12-13 06:53:57.158296] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:53.663 06:53:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 06:53:57 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 Malloc0 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 06:53:57 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 [2024-12-13 06:53:57.254245] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:53.663 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.663 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.663 [2024-12-13 06:53:57.270035] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:53.663 [ 00:14:53.663 { 00:14:53.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.663 "subtype": "Discovery", 00:14:53.663 "listen_addresses": [ 00:14:53.663 { 00:14:53.663 "transport": "TCP", 00:14:53.663 "trtype": "TCP", 00:14:53.663 "adrfam": "IPv4", 00:14:53.663 "traddr": "10.0.0.2", 00:14:53.663 "trsvcid": "4420" 00:14:53.663 } 00:14:53.663 ], 00:14:53.663 "allow_any_host": true, 00:14:53.663 "hosts": [] 00:14:53.663 }, 00:14:53.663 { 00:14:53.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.663 "subtype": "NVMe", 00:14:53.663 "listen_addresses": [ 00:14:53.663 { 00:14:53.663 "transport": "TCP", 00:14:53.663 "trtype": "TCP", 00:14:53.663 "adrfam": "IPv4", 00:14:53.663 "traddr": "10.0.0.2", 00:14:53.663 "trsvcid": "4420" 00:14:53.663 } 00:14:53.663 ], 00:14:53.663 "allow_any_host": true, 00:14:53.663 "hosts": [], 00:14:53.663 "serial_number": "SPDK00000000000001", 00:14:53.663 "model_number": "SPDK bdev Controller", 00:14:53.663 "max_namespaces": 32, 00:14:53.663 "min_cntlid": 1, 00:14:53.663 "max_cntlid": 65519, 00:14:53.663 "namespaces": [ 00:14:53.663 { 00:14:53.663 "nsid": 1, 00:14:53.663 "bdev_name": "Malloc0", 00:14:53.663 "name": "Malloc0", 00:14:53.663 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:53.663 "eui64": "ABCDEF0123456789", 00:14:53.663 "uuid": "62041565-2aae-4a8e-bf98-f8369d22019c" 00:14:53.663 } 00:14:53.663 ] 00:14:53.663 } 00:14:53.663 ] 00:14:53.663 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.663 06:53:57 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:53.663 [2024-12-13 06:53:57.304931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
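The rpc_cmd provisioning sequence above maps one-to-one onto SPDK's scripts/rpc.py subcommands, so the same target layout can be rebuilt by hand against an already-running nvmf_tgt. A minimal sketch, assuming an SPDK checkout at ~/spdk and the default /var/tmp/spdk.sock RPC socket (the checkout path is an assumption; the flags are exactly the ones this run used):

  RPC=~/spdk/scripts/rpc.py                               # assumed checkout location
  $RPC nvmf_create_transport -t tcp -o -u 8192            # flags as used above (-u: in-capsule data size)
  $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                                # should print the JSON dump shown above

The closing nvmf_get_subsystems call is the sanity check the test relies on: both the discovery subsystem and cnode1 must already expose the 10.0.0.2:4420 listener before the identify run below starts.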
00:14:53.663 [2024-12-13 06:53:57.305114] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80321 ] 00:14:53.663 [2024-12-13 06:53:57.443237] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:53.663 [2024-12-13 06:53:57.443341] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.663 [2024-12-13 06:53:57.443364] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.663 [2024-12-13 06:53:57.443379] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.663 [2024-12-13 06:53:57.443392] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:53.663 [2024-12-13 06:53:57.443521] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:53.663 [2024-12-13 06:53:57.443597] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e9e510 0 00:14:53.663 [2024-12-13 06:53:57.455448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.663 [2024-12-13 06:53:57.455474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.663 [2024-12-13 06:53:57.455480] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.663 [2024-12-13 06:53:57.455485] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.663 [2024-12-13 06:53:57.455529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.455536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.455540] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.663 [2024-12-13 06:53:57.455555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.663 [2024-12-13 06:53:57.455604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.663 [2024-12-13 06:53:57.463456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.663 [2024-12-13 06:53:57.463495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.663 [2024-12-13 06:53:57.463516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463521] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.663 [2024-12-13 06:53:57.463538] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.663 [2024-12-13 06:53:57.463546] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:53.663 [2024-12-13 06:53:57.463553] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:53.663 [2024-12-13 06:53:57.463570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.663 [2024-12-13 
06:53:57.463580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.663 [2024-12-13 06:53:57.463590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.663 [2024-12-13 06:53:57.463618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.663 [2024-12-13 06:53:57.463697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.663 [2024-12-13 06:53:57.463720] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.663 [2024-12-13 06:53:57.463724] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.663 [2024-12-13 06:53:57.463750] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:53.663 [2024-12-13 06:53:57.463774] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:53.663 [2024-12-13 06:53:57.463798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463802] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.663 [2024-12-13 06:53:57.463814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.663 [2024-12-13 06:53:57.463833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.663 [2024-12-13 06:53:57.463918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.663 [2024-12-13 06:53:57.463925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.663 [2024-12-13 06:53:57.463930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.663 [2024-12-13 06:53:57.463941] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:53.663 [2024-12-13 06:53:57.463950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.663 [2024-12-13 06:53:57.463958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463962] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.663 [2024-12-13 06:53:57.463966] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.463974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.664 [2024-12-13 06:53:57.463993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464051] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464055] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.664 [2024-12-13 06:53:57.464077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.664 [2024-12-13 06:53:57.464110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464154] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464175] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:53.664 [2024-12-13 06:53:57.464181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:53.664 [2024-12-13 06:53:57.464189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.664 [2024-12-13 06:53:57.464295] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:53.664 [2024-12-13 06:53:57.464301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.664 [2024-12-13 06:53:57.464310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.664 [2024-12-13 06:53:57.464344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464428] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
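The FABRIC PROPERTY GET/SET capsules traced above are the standard NVMe-oF controller-enable handshake: read VS and CAP, read CC and confirm CC.EN = 0 with CSTS.RDY = 0, write CC.EN = 1, then poll CSTS until RDY = 1 (the records that follow) before moving on to IDENTIFY. The same trace can be regenerated against any reachable target; a sketch using the binary and addresses from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all          # -L all enables all debug log flags, producing the per-capsule trace seen here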
00:14:53.664 [2024-12-13 06:53:57.464433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464439] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.664 [2024-12-13 06:53:57.464450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.664 [2024-12-13 06:53:57.464485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464552] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.664 [2024-12-13 06:53:57.464557] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.464566] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:53.664 [2024-12-13 06:53:57.464582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.464593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.664 [2024-12-13 06:53:57.464629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.664 [2024-12-13 06:53:57.464726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.664 [2024-12-13 06:53:57.464730] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464735] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9e510): datao=0, datal=4096, cccid=0 00:14:53.664 [2024-12-13 06:53:57.464740] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eea8a0) on tqpair(0x1e9e510): expected_datao=0, 
payload_size=4096 00:14:53.664 [2024-12-13 06:53:57.464749] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464754] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464773] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464787] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:53.664 [2024-12-13 06:53:57.464793] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:53.664 [2024-12-13 06:53:57.464798] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:53.664 [2024-12-13 06:53:57.464803] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:53.664 [2024-12-13 06:53:57.464808] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:53.664 [2024-12-13 06:53:57.464814] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.464827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.464836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.664 [2024-12-13 06:53:57.464872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.664 [2024-12-13 06:53:57.464950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.664 [2024-12-13 06:53:57.464957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.664 [2024-12-13 06:53:57.464961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eea8a0) on tqpair=0x1e9e510 00:14:53.664 [2024-12-13 06:53:57.464974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.464982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.464989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.664 [2024-12-13 
06:53:57.464996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465004] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.465010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.664 [2024-12-13 06:53:57.465017] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.465031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.664 [2024-12-13 06:53:57.465037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.664 [2024-12-13 06:53:57.465052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.664 [2024-12-13 06:53:57.465057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.465070] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.664 [2024-12-13 06:53:57.465078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.664 [2024-12-13 06:53:57.465083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.665 [2024-12-13 06:53:57.465114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eea8a0, cid 0, qid 0 00:14:53.665 [2024-12-13 06:53:57.465121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeaa00, cid 1, qid 0 00:14:53.665 [2024-12-13 06:53:57.465126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeab60, cid 2, qid 0 00:14:53.665 [2024-12-13 06:53:57.465131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.665 [2024-12-13 06:53:57.465136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeae20, cid 4, qid 0 00:14:53.665 [2024-12-13 06:53:57.465232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.465250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.665 [2024-12-13 06:53:57.465255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1eeae20) on tqpair=0x1e9e510 00:14:53.665 [2024-12-13 06:53:57.465266] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:53.665 [2024-12-13 06:53:57.465273] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:53.665 [2024-12-13 06:53:57.465285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.665 [2024-12-13 06:53:57.465321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeae20, cid 4, qid 0 00:14:53.665 [2024-12-13 06:53:57.465405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.665 [2024-12-13 06:53:57.465414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.665 [2024-12-13 06:53:57.465418] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465422] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9e510): datao=0, datal=4096, cccid=4 00:14:53.665 [2024-12-13 06:53:57.465427] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeae20) on tqpair(0x1e9e510): expected_datao=0, payload_size=4096 00:14:53.665 [2024-12-13 06:53:57.465436] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465440] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.465461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.665 [2024-12-13 06:53:57.465465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeae20) on tqpair=0x1e9e510 00:14:53.665 [2024-12-13 06:53:57.465484] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:53.665 [2024-12-13 06:53:57.465511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.665 [2024-12-13 06:53:57.465537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465552] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.665 [2024-12-13 06:53:57.465578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeae20, cid 4, qid 0 00:14:53.665 [2024-12-13 06:53:57.465586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeaf80, cid 5, qid 0 00:14:53.665 [2024-12-13 06:53:57.465699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.665 [2024-12-13 06:53:57.465706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.665 [2024-12-13 06:53:57.465710] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465714] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9e510): datao=0, datal=1024, cccid=4 00:14:53.665 [2024-12-13 06:53:57.465719] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeae20) on tqpair(0x1e9e510): expected_datao=0, payload_size=1024 00:14:53.665 [2024-12-13 06:53:57.465727] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465732] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.465744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.665 [2024-12-13 06:53:57.465748] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeaf80) on tqpair=0x1e9e510 00:14:53.665 [2024-12-13 06:53:57.465770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.465778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.665 [2024-12-13 06:53:57.465782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeae20) on tqpair=0x1e9e510 00:14:53.665 [2024-12-13 06:53:57.465798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465802] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.665 [2024-12-13 06:53:57.465837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeae20, cid 4, qid 0 00:14:53.665 [2024-12-13 06:53:57.465911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.665 [2024-12-13 06:53:57.465919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.665 [2024-12-13 06:53:57.465923] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465927] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9e510): datao=0, datal=3072, cccid=4 00:14:53.665 [2024-12-13 06:53:57.465932] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeae20) on tqpair(0x1e9e510): expected_datao=0, payload_size=3072 00:14:53.665 [2024-12-13 
06:53:57.465940] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465944] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.465958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.665 [2024-12-13 06:53:57.465962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeae20) on tqpair=0x1e9e510 00:14:53.665 [2024-12-13 06:53:57.465976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.465985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9e510) 00:14:53.665 [2024-12-13 06:53:57.465992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.665 [2024-12-13 06:53:57.466015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeae20, cid 4, qid 0 00:14:53.665 [2024-12-13 06:53:57.466080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.665 [2024-12-13 06:53:57.466087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.665 [2024-12-13 06:53:57.466091] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.466095] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9e510): datao=0, datal=8, cccid=4 00:14:53.665 [2024-12-13 06:53:57.466100] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeae20) on tqpair(0x1e9e510): expected_datao=0, payload_size=8 00:14:53.665 [2024-12-13 06:53:57.466107] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.466111] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.665 [2024-12-13 06:53:57.466126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.665 [2024-12-13 06:53:57.466134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.666 [2024-12-13 06:53:57.466137] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.666 [2024-12-13 06:53:57.466142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeae20) on tqpair=0x1e9e510 00:14:53.666 ===================================================== 00:14:53.666 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:53.666 ===================================================== 00:14:53.666 Controller Capabilities/Features 00:14:53.666 ================================ 00:14:53.666 Vendor ID: 0000 00:14:53.666 Subsystem Vendor ID: 0000 00:14:53.666 Serial Number: .................... 00:14:53.666 Model Number: ........................................ 
00:14:53.666 Firmware Version: 24.01.1 00:14:53.666 Recommended Arb Burst: 0 00:14:53.666 IEEE OUI Identifier: 00 00 00 00:14:53.666 Multi-path I/O 00:14:53.666 May have multiple subsystem ports: No 00:14:53.666 May have multiple controllers: No 00:14:53.666 Associated with SR-IOV VF: No 00:14:53.666 Max Data Transfer Size: 131072 00:14:53.666 Max Number of Namespaces: 0 00:14:53.666 Max Number of I/O Queues: 1024 00:14:53.666 NVMe Specification Version (VS): 1.3 00:14:53.666 NVMe Specification Version (Identify): 1.3 00:14:53.666 Maximum Queue Entries: 128 00:14:53.666 Contiguous Queues Required: Yes 00:14:53.666 Arbitration Mechanisms Supported 00:14:53.666 Weighted Round Robin: Not Supported 00:14:53.666 Vendor Specific: Not Supported 00:14:53.666 Reset Timeout: 15000 ms 00:14:53.666 Doorbell Stride: 4 bytes 00:14:53.666 NVM Subsystem Reset: Not Supported 00:14:53.666 Command Sets Supported 00:14:53.666 NVM Command Set: Supported 00:14:53.666 Boot Partition: Not Supported 00:14:53.666 Memory Page Size Minimum: 4096 bytes 00:14:53.666 Memory Page Size Maximum: 4096 bytes 00:14:53.666 Persistent Memory Region: Not Supported 00:14:53.666 Optional Asynchronous Events Supported 00:14:53.666 Namespace Attribute Notices: Not Supported 00:14:53.666 Firmware Activation Notices: Not Supported 00:14:53.666 ANA Change Notices: Not Supported 00:14:53.666 PLE Aggregate Log Change Notices: Not Supported 00:14:53.666 LBA Status Info Alert Notices: Not Supported 00:14:53.666 EGE Aggregate Log Change Notices: Not Supported 00:14:53.666 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.666 Zone Descriptor Change Notices: Not Supported 00:14:53.666 Discovery Log Change Notices: Supported 00:14:53.666 Controller Attributes 00:14:53.666 128-bit Host Identifier: Not Supported 00:14:53.666 Non-Operational Permissive Mode: Not Supported 00:14:53.666 NVM Sets: Not Supported 00:14:53.666 Read Recovery Levels: Not Supported 00:14:53.666 Endurance Groups: Not Supported 00:14:53.666 Predictable Latency Mode: Not Supported 00:14:53.666 Traffic Based Keep Alive: Not Supported 00:14:53.666 Namespace Granularity: Not Supported 00:14:53.666 SQ Associations: Not Supported 00:14:53.666 UUID List: Not Supported 00:14:53.666 Multi-Domain Subsystem: Not Supported 00:14:53.666 Fixed Capacity Management: Not Supported 00:14:53.666 Variable Capacity Management: Not Supported 00:14:53.666 Delete Endurance Group: Not Supported 00:14:53.666 Delete NVM Set: Not Supported 00:14:53.666 Extended LBA Formats Supported: Not Supported 00:14:53.666 Flexible Data Placement Supported: Not Supported 00:14:53.666 00:14:53.666 Controller Memory Buffer Support 00:14:53.666 ================================ 00:14:53.666 Supported: No 00:14:53.666 00:14:53.666 Persistent Memory Region Support 00:14:53.666 ================================ 00:14:53.666 Supported: No 00:14:53.666 00:14:53.666 Admin Command Set Attributes 00:14:53.666 ============================ 00:14:53.666 Security Send/Receive: Not Supported 00:14:53.666 Format NVM: Not Supported 00:14:53.666 Firmware Activate/Download: Not Supported 00:14:53.666 Namespace Management: Not Supported 00:14:53.666 Device Self-Test: Not Supported 00:14:53.666 Directives: Not Supported 00:14:53.666 NVMe-MI: Not Supported 00:14:53.666 Virtualization Management: Not Supported 00:14:53.666 Doorbell Buffer Config: Not Supported 00:14:53.666 Get LBA Status Capability: Not Supported 00:14:53.666 Command & Feature Lockdown Capability: Not Supported 00:14:53.666 Abort Command Limit: 1 00:14:53.666 
Async Event Request Limit: 4 00:14:53.666 Number of Firmware Slots: N/A 00:14:53.666 Firmware Slot 1 Read-Only: N/A 00:14:53.666 Firmware Activation Without Reset: N/A 00:14:53.666 Multiple Update Detection Support: N/A 00:14:53.666 Firmware Update Granularity: No Information Provided 00:14:53.666 Per-Namespace SMART Log: No 00:14:53.666 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.666 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:53.666 Command Effects Log Page: Not Supported 00:14:53.666 Get Log Page Extended Data: Supported 00:14:53.666 Telemetry Log Pages: Not Supported 00:14:53.666 Persistent Event Log Pages: Not Supported 00:14:53.666 Supported Log Pages Log Page: May Support 00:14:53.666 Commands Supported & Effects Log Page: Not Supported 00:14:53.666 Feature Identifiers & Effects Log Page: May Support 00:14:53.666 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.666 Data Area 4 for Telemetry Log: Not Supported 00:14:53.666 Error Log Page Entries Supported: 128 00:14:53.666 Keep Alive: Not Supported 00:14:53.666 00:14:53.666 NVM Command Set Attributes 00:14:53.666 ========================== 00:14:53.666 Submission Queue Entry Size 00:14:53.666 Max: 1 00:14:53.666 Min: 1 00:14:53.666 Completion Queue Entry Size 00:14:53.666 Max: 1 00:14:53.666 Min: 1 00:14:53.666 Number of Namespaces: 0 00:14:53.666 Compare Command: Not Supported 00:14:53.666 Write Uncorrectable Command: Not Supported 00:14:53.666 Dataset Management Command: Not Supported 00:14:53.666 Write Zeroes Command: Not Supported 00:14:53.666 Set Features Save Field: Not Supported 00:14:53.666 Reservations: Not Supported 00:14:53.666 Timestamp: Not Supported 00:14:53.666 Copy: Not Supported 00:14:53.666 Volatile Write Cache: Not Present 00:14:53.666 Atomic Write Unit (Normal): 1 00:14:53.666 Atomic Write Unit (PFail): 1 00:14:53.666 Atomic Compare & Write Unit: 1 00:14:53.666 Fused Compare & Write: Supported 00:14:53.666 Scatter-Gather List 00:14:53.666 SGL Command Set: Supported 00:14:53.666 SGL Keyed: Supported 00:14:53.666 SGL Bit Bucket Descriptor: Not Supported 00:14:53.666 SGL Metadata Pointer: Not Supported 00:14:53.666 Oversized SGL: Not Supported 00:14:53.666 SGL Metadata Address: Not Supported 00:14:53.666 SGL Offset: Supported 00:14:53.666 Transport SGL Data Block: Not Supported 00:14:53.666 Replay Protected Memory Block: Not Supported 00:14:53.666 00:14:53.666 Firmware Slot Information 00:14:53.666 ========================= 00:14:53.666 Active slot: 0 00:14:53.666 00:14:53.666 00:14:53.666 Error Log 00:14:53.666 ========= 00:14:53.666 00:14:53.666 Active Namespaces 00:14:53.666 ================= 00:14:53.666 Discovery Log Page 00:14:53.666 ================== 00:14:53.666 Generation Counter: 2 00:14:53.666 Number of Records: 2 00:14:53.666 Record Format: 0 00:14:53.666 00:14:53.666 Discovery Log Entry 0 00:14:53.666 ---------------------- 00:14:53.666 Transport Type: 3 (TCP) 00:14:53.666 Address Family: 1 (IPv4) 00:14:53.666 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:53.666 Entry Flags: 00:14:53.666 Duplicate Returned Information: 1 00:14:53.666 Explicit Persistent Connection Support for Discovery: 1 00:14:53.666 Transport Requirements: 00:14:53.666 Secure Channel: Not Required 00:14:53.666 Port ID: 0 (0x0000) 00:14:53.666 Controller ID: 65535 (0xffff) 00:14:53.666 Admin Max SQ Size: 128 00:14:53.666 Transport Service Identifier: 4420 00:14:53.666 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:53.666 Transport Address: 10.0.0.2 00:14:53.666 
Discovery Log Entry 1 00:14:53.666 ---------------------- 00:14:53.666 Transport Type: 3 (TCP) 00:14:53.666 Address Family: 1 (IPv4) 00:14:53.666 Subsystem Type: 2 (NVM Subsystem) 00:14:53.666 Entry Flags: 00:14:53.666 Duplicate Returned Information: 0 00:14:53.666 Explicit Persistent Connection Support for Discovery: 0 00:14:53.666 Transport Requirements: 00:14:53.666 Secure Channel: Not Required 00:14:53.666 Port ID: 0 (0x0000) 00:14:53.666 Controller ID: 65535 (0xffff) 00:14:53.666 Admin Max SQ Size: 128 00:14:53.666 Transport Service Identifier: 4420 00:14:53.666 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:53.666 Transport Address: 10.0.0.2 [2024-12-13 06:53:57.466263] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:53.666 [2024-12-13 06:53:57.466283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.667 [2024-12-13 06:53:57.466291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.667 [2024-12-13 06:53:57.466298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.667 [2024-12-13 06:53:57.466304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.667 [2024-12-13 06:53:57.466315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.466438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.466446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.466450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.466464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.466578] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.466585] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.466589] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466593] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.466600] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:53.667 [2024-12-13 06:53:57.466605] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:53.667 [2024-12-13 06:53:57.466615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.466698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.466705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.466708] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.466725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.466803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.466810] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.466814] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466818] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.466830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466862] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.466910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 
06:53:57.466917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.466921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.466937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.466945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.466953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.466969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.467018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.467025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.467029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.467045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.467061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.467077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.467129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.467136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.467140] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.467155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.467172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.467188] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.467237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.467244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.467247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
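The Discovery Log Page printed above (generation counter 2, one discovery record plus one NVM record for cnode1) is what any initiator fetches before connecting; since nvme-tcp was modprobed earlier in this run, the Linux kernel initiator should return the same two records. A sketch, assuming nvme-cli is installed on the host:

  nvme discover -t tcp -a 10.0.0.2 -s 4420    # reads the same discovery log over a transient admin connection

The property polling traced after the log dump is the controller teardown: the host programs the shutdown notification in CC, then polls CSTS until the shutdown-complete status is reported.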
00:14:53.667 [2024-12-13 06:53:57.467252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.467263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.467271] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.467279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.467295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.467344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.470405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.470413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.470418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.470434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.470439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.470443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9e510) 00:14:53.667 [2024-12-13 06:53:57.470452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.667 [2024-12-13 06:53:57.470478] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeacc0, cid 3, qid 0 00:14:53.667 [2024-12-13 06:53:57.470531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.667 [2024-12-13 06:53:57.470538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.667 [2024-12-13 06:53:57.470542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.667 [2024-12-13 06:53:57.470546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eeacc0) on tqpair=0x1e9e510 00:14:53.667 [2024-12-13 06:53:57.470556] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 3 milliseconds 00:14:53.667 00:14:53.667 06:53:57 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:53.667 [2024-12-13 06:53:57.509988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
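This second identify invocation repeats the whole connect/enable/identify sequence against the NVM subsystem itself (subnqn nqn.2016-06.io.spdk:cnode1) rather than the discovery service, so this controller reports the Malloc0 namespace. The equivalent step with the kernel initiator would be a connect; a sketch, again assuming nvme-cli (the resulting device name varies):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                   # the Malloc0 namespace appears as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1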
00:14:53.667 [2024-12-13 06:53:57.510200] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80323 ] 00:14:53.668 [2024-12-13 06:53:57.651583] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:53.668 [2024-12-13 06:53:57.651662] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.668 [2024-12-13 06:53:57.651670] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.668 [2024-12-13 06:53:57.651682] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.668 [2024-12-13 06:53:57.651710] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:53.668 [2024-12-13 06:53:57.651853] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:53.668 [2024-12-13 06:53:57.651934] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x824510 0 00:14:53.668 [2024-12-13 06:53:57.659410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.668 [2024-12-13 06:53:57.659434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.668 [2024-12-13 06:53:57.659456] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.668 [2024-12-13 06:53:57.659460] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.668 [2024-12-13 06:53:57.659502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.659509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.659513] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.659526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.668 [2024-12-13 06:53:57.659557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.666412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.666435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.666456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.666481] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.668 [2024-12-13 06:53:57.666489] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:53.668 [2024-12-13 06:53:57.666495] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:53.668 [2024-12-13 06:53:57.666511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666520] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.666529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.666557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.666613] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.666620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.666624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666628] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.666633] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:53.668 [2024-12-13 06:53:57.666641] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:53.668 [2024-12-13 06:53:57.666649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.666665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.666683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.666760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.666766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.666770] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.666781] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:53.668 [2024-12-13 06:53:57.666789] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.666797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666805] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.666812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.666829] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.666876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.666882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.666886] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.666896] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.666906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666910] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666914] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.666921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.666937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.666984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.666990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.666994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.666998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.667003] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.668 [2024-12-13 06:53:57.667008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.667015] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.667121] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:53.668 [2024-12-13 06:53:57.667125] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.667134] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.667149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.667166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.667215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.667222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.667225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.667235] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.668 [2024-12-13 06:53:57.667244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.667260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.667276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.667326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.668 [2024-12-13 06:53:57.667332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.668 [2024-12-13 06:53:57.667336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.668 [2024-12-13 06:53:57.667345] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.668 [2024-12-13 06:53:57.667350] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.668 [2024-12-13 06:53:57.667359] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:53.668 [2024-12-13 06:53:57.667390] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.668 [2024-12-13 06:53:57.667400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.668 [2024-12-13 06:53:57.667424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.668 [2024-12-13 06:53:57.667450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.668 [2024-12-13 06:53:57.667472] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.668 [2024-12-13 06:53:57.667567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.668 [2024-12-13 06:53:57.667575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.668 [2024-12-13 06:53:57.667580] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667584] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=4096, cccid=0 00:14:53.669 [2024-12-13 06:53:57.667589] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8708a0) on tqpair(0x824510): expected_datao=0, payload_size=4096 00:14:53.669 [2024-12-13 06:53:57.667599] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667604] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 
06:53:57.667612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.669 [2024-12-13 06:53:57.667619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.669 [2024-12-13 06:53:57.667623] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.669 [2024-12-13 06:53:57.667636] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:53.669 [2024-12-13 06:53:57.667642] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:53.669 [2024-12-13 06:53:57.667647] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:53.669 [2024-12-13 06:53:57.667652] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:53.669 [2024-12-13 06:53:57.667657] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:53.669 [2024-12-13 06:53:57.667662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.667677] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.667685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667689] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.667702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.669 [2024-12-13 06:53:57.667737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.669 [2024-12-13 06:53:57.667805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.669 [2024-12-13 06:53:57.667812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.669 [2024-12-13 06:53:57.667815] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667819] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8708a0) on tqpair=0x824510 00:14:53.669 [2024-12-13 06:53:57.667827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.667841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.669 [2024-12-13 06:53:57.667847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667855] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x824510) 
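The identify-done lines just above (transport max_xfer_size, MDTS-derived limit 131072, CNTLID 0x0001, fused compare-and-write) and the four ASYNC EVENT REQUESTs armed next all map onto SPDK's public controller API. A hedged sketch, assuming the ctrlr handle from the earlier sketch; aer_cb is an illustrative name:

    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    	(void)arg;
    	/* Invoked when one of the outstanding AERs (cid 0-3 above) completes. */
    	if (!spdk_nvme_cpl_is_error(cpl)) {
    		printf("AER fired: cdw0=0x%x\n", cpl->cdw0);
    	}
    }

    static void
    inspect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
    	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    	/* CNTLID 0x0001 and the fused compare-and-write bit come from the
    	 * identify data; the computed 131072 transfer limit is exposed via
    	 * spdk_nvme_ctrlr_get_max_xfer_size() rather than the raw MDTS field. */
    	printf("cntlid: 0x%04x\n", (unsigned)cdata->cntlid);
    	printf("fused compare-and-write: %u\n",
    	       (unsigned)cdata->fuses.compare_and_write);
    	printf("max xfer size: %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

    	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }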
00:14:53.669 [2024-12-13 06:53:57.667886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.669 [2024-12-13 06:53:57.667893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.667908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.669 [2024-12-13 06:53:57.667914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667922] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.667928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.669 [2024-12-13 06:53:57.667934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.667948] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.667955] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.667963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.667971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.669 [2024-12-13 06:53:57.667993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8708a0, cid 0, qid 0 00:14:53.669 [2024-12-13 06:53:57.668000] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870a00, cid 1, qid 0 00:14:53.669 [2024-12-13 06:53:57.668005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870b60, cid 2, qid 0 00:14:53.669 [2024-12-13 06:53:57.668011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870cc0, cid 3, qid 0 00:14:53.669 [2024-12-13 06:53:57.668016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.669 [2024-12-13 06:53:57.668119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.669 [2024-12-13 06:53:57.668126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.669 [2024-12-13 06:53:57.668130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.669 [2024-12-13 06:53:57.668140] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:53.669 [2024-12-13 06:53:57.668145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668165] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668173] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.668215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.669 [2024-12-13 06:53:57.668233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.669 [2024-12-13 06:53:57.668288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.669 [2024-12-13 06:53:57.668294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.669 [2024-12-13 06:53:57.668298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.669 [2024-12-13 06:53:57.668372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668383] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668392] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668396] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668412] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.668421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.669 [2024-12-13 06:53:57.668442] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.669 [2024-12-13 06:53:57.668504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.669 [2024-12-13 06:53:57.668511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.669 [2024-12-13 06:53:57.668514] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668518] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=4096, cccid=4 00:14:53.669 [2024-12-13 06:53:57.668523] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x870e20) on tqpair(0x824510): expected_datao=0, payload_size=4096 00:14:53.669 [2024-12-13 06:53:57.668531] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668535] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:53.669 [2024-12-13 06:53:57.668550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.669 [2024-12-13 06:53:57.668553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.669 [2024-12-13 06:53:57.668572] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:53.669 [2024-12-13 06:53:57.668583] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668594] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.669 [2024-12-13 06:53:57.668602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.669 [2024-12-13 06:53:57.668610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.669 [2024-12-13 06:53:57.668618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.669 [2024-12-13 06:53:57.668636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.670 [2024-12-13 06:53:57.668726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.670 [2024-12-13 06:53:57.668733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.670 [2024-12-13 06:53:57.668737] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668740] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=4096, cccid=4 00:14:53.670 [2024-12-13 06:53:57.668745] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x870e20) on tqpair(0x824510): expected_datao=0, payload_size=4096 00:14:53.670 [2024-12-13 06:53:57.668753] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668756] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.668770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.668774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.668792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668810] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668819] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.668826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.670 [2024-12-13 06:53:57.668844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.670 [2024-12-13 06:53:57.668905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.670 [2024-12-13 06:53:57.668912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.670 [2024-12-13 06:53:57.668916] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668919] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=4096, cccid=4 00:14:53.670 [2024-12-13 06:53:57.668924] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x870e20) on tqpair(0x824510): expected_datao=0, payload_size=4096 00:14:53.670 [2024-12-13 06:53:57.668931] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668935] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.668949] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.668953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.668956] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.668965] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668983] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.668995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.669000] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.670 [2024-12-13 06:53:57.669005] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:53.670 [2024-12-13 06:53:57.669010] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:53.670 [2024-12-13 06:53:57.669026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669042] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.670 [2024-12-13 06:53:57.669049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.670 [2024-12-13 06:53:57.669086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.670 [2024-12-13 06:53:57.669094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870f80, cid 5, qid 0 00:14:53.670 [2024-12-13 06:53:57.669154] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.669161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.669165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.669176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.669182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.669185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870f80) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.669199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.670 [2024-12-13 06:53:57.669231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870f80, cid 5, qid 0 00:14:53.670 [2024-12-13 06:53:57.669275] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.669282] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.669285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870f80) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.669299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.670 [2024-12-13 
06:53:57.669346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870f80, cid 5, qid 0 00:14:53.670 [2024-12-13 06:53:57.669429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.669439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.669443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870f80) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.669458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.670 [2024-12-13 06:53:57.669494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870f80, cid 5, qid 0 00:14:53.670 [2024-12-13 06:53:57.669542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.670 [2024-12-13 06:53:57.669549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.670 [2024-12-13 06:53:57.669553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870f80) on tqpair=0x824510 00:14:53.670 [2024-12-13 06:53:57.669571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.670 [2024-12-13 06:53:57.669580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x824510) 00:14:53.670 [2024-12-13 06:53:57.669587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.671 [2024-12-13 06:53:57.669595] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x824510) 00:14:53.671 [2024-12-13 06:53:57.669610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.671 [2024-12-13 06:53:57.669618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x824510) 00:14:53.671 [2024-12-13 06:53:57.669633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.671 [2024-12-13 06:53:57.669641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669646] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669650] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x824510) 00:14:53.671 [2024-12-13 06:53:57.669656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.671 [2024-12-13 06:53:57.669675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870f80, cid 5, qid 0 00:14:53.671 [2024-12-13 06:53:57.669683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870e20, cid 4, qid 0 00:14:53.671 [2024-12-13 06:53:57.669688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8710e0, cid 6, qid 0 00:14:53.671 [2024-12-13 06:53:57.669693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x871240, cid 7, qid 0 00:14:53.671 [2024-12-13 06:53:57.669854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.671 [2024-12-13 06:53:57.669861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.671 [2024-12-13 06:53:57.669865] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669868] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=8192, cccid=5 00:14:53.671 [2024-12-13 06:53:57.669873] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x870f80) on tqpair(0x824510): expected_datao=0, payload_size=8192 00:14:53.671 [2024-12-13 06:53:57.669891] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669896] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.671 [2024-12-13 06:53:57.669907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.671 [2024-12-13 06:53:57.669911] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669915] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=512, cccid=4 00:14:53.671 [2024-12-13 06:53:57.669919] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x870e20) on tqpair(0x824510): expected_datao=0, payload_size=512 00:14:53.671 [2024-12-13 06:53:57.669926] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669930] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669936] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.671 [2024-12-13 06:53:57.669942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.671 [2024-12-13 06:53:57.669945] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669949] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=512, cccid=6 00:14:53.671 [2024-12-13 06:53:57.669953] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8710e0) on tqpair(0x824510): expected_datao=0, payload_size=512 00:14:53.671 [2024-12-13 06:53:57.669960] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669964] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669969] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.671 [2024-12-13 06:53:57.669975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.671 [2024-12-13 06:53:57.669978] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669982] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x824510): datao=0, datal=4096, cccid=7 00:14:53.671 [2024-12-13 06:53:57.669987] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x871240) on tqpair(0x824510): expected_datao=0, payload_size=4096 00:14:53.671 [2024-12-13 06:53:57.669994] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.669997] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.670005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.671 [2024-12-13 06:53:57.670011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.671 [2024-12-13 06:53:57.670015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.670019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870f80) on tqpair=0x824510 00:14:53.671 [2024-12-13 06:53:57.670034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.671 [2024-12-13 06:53:57.670041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.671 [2024-12-13 06:53:57.670044] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.671 [2024-12-13 06:53:57.670048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870e20) on tqpair=0x824510 00:14:53.671 [2024-12-13 06:53:57.670060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.671 [2024-12-13 06:53:57.670066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.671 [2024-12-13 06:53:57.670070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.671 ===================================================== 00:14:53.671 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.671 ===================================================== 00:14:53.671 Controller Capabilities/Features 00:14:53.671 ================================ 00:14:53.671 Vendor ID: 8086 00:14:53.671 Subsystem Vendor ID: 8086 00:14:53.671 Serial Number: SPDK00000000000001 00:14:53.671 Model Number: SPDK bdev Controller 00:14:53.671 Firmware Version: 24.01.1 00:14:53.671 Recommended Arb Burst: 6 00:14:53.671 IEEE OUI Identifier: e4 d2 5c 00:14:53.671 Multi-path I/O 00:14:53.671 May have multiple subsystem ports: Yes 00:14:53.671 May have multiple controllers: Yes 00:14:53.671 Associated with SR-IOV VF: No 00:14:53.671 Max Data Transfer Size: 131072 00:14:53.671 Max Number of Namespaces: 32 00:14:53.671 Max Number of I/O Queues: 127 00:14:53.671 NVMe Specification Version (VS): 1.3 00:14:53.671 NVMe Specification Version (Identify): 1.3 00:14:53.671 Maximum Queue Entries: 128 00:14:53.671 Contiguous Queues Required: Yes 00:14:53.671 Arbitration Mechanisms Supported 00:14:53.671 Weighted Round Robin: Not Supported 00:14:53.671 Vendor Specific: Not Supported 00:14:53.671 Reset Timeout: 15000 ms 00:14:53.671 Doorbell Stride: 4 bytes 00:14:53.671 NVM Subsystem Reset: Not Supported 00:14:53.671 Command Sets Supported 00:14:53.671 NVM Command Set: Supported 00:14:53.671 Boot Partition: Not Supported 00:14:53.671 Memory 
Page Size Minimum: 4096 bytes 00:14:53.671 Memory Page Size Maximum: 4096 bytes 00:14:53.671 Persistent Memory Region: Not Supported 00:14:53.671 Optional Asynchronous Events Supported 00:14:53.671 Namespace Attribute Notices: Supported 00:14:53.671 Firmware Activation Notices: Not Supported 00:14:53.671 ANA Change Notices: Not Supported 00:14:53.671 PLE Aggregate Log Change Notices: Not Supported 00:14:53.671 LBA Status Info Alert Notices: Not Supported 00:14:53.671 EGE Aggregate Log Change Notices: Not Supported 00:14:53.671 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.671 Zone Descriptor Change Notices: Not Supported 00:14:53.671 Discovery Log Change Notices: Not Supported 00:14:53.671 Controller Attributes 00:14:53.671 128-bit Host Identifier: Supported 00:14:53.671 Non-Operational Permissive Mode: Not Supported 00:14:53.671 NVM Sets: Not Supported 00:14:53.671 Read Recovery Levels: Not Supported 00:14:53.671 Endurance Groups: Not Supported 00:14:53.671 Predictable Latency Mode: Not Supported 00:14:53.671 Traffic Based Keep ALive: Not Supported 00:14:53.671 Namespace Granularity: Not Supported 00:14:53.671 SQ Associations: Not Supported 00:14:53.671 UUID List: Not Supported 00:14:53.671 Multi-Domain Subsystem: Not Supported 00:14:53.671 Fixed Capacity Management: Not Supported 00:14:53.671 Variable Capacity Management: Not Supported 00:14:53.671 Delete Endurance Group: Not Supported 00:14:53.671 Delete NVM Set: Not Supported 00:14:53.671 Extended LBA Formats Supported: Not Supported 00:14:53.671 Flexible Data Placement Supported: Not Supported 00:14:53.671 00:14:53.671 Controller Memory Buffer Support 00:14:53.671 ================================ 00:14:53.671 Supported: No 00:14:53.671 00:14:53.671 Persistent Memory Region Support 00:14:53.671 ================================ 00:14:53.671 Supported: No 00:14:53.671 00:14:53.671 Admin Command Set Attributes 00:14:53.671 ============================ 00:14:53.671 Security Send/Receive: Not Supported 00:14:53.671 Format NVM: Not Supported 00:14:53.671 Firmware Activate/Download: Not Supported 00:14:53.671 Namespace Management: Not Supported 00:14:53.671 Device Self-Test: Not Supported 00:14:53.671 Directives: Not Supported 00:14:53.671 NVMe-MI: Not Supported 00:14:53.671 Virtualization Management: Not Supported 00:14:53.671 Doorbell Buffer Config: Not Supported 00:14:53.671 Get LBA Status Capability: Not Supported 00:14:53.671 Command & Feature Lockdown Capability: Not Supported 00:14:53.671 Abort Command Limit: 4 00:14:53.671 Async Event Request Limit: 4 00:14:53.671 Number of Firmware Slots: N/A 00:14:53.671 Firmware Slot 1 Read-Only: N/A 00:14:53.671 Firmware Activation Without Reset: N/A 00:14:53.671 Multiple Update Detection Support: N/A 00:14:53.672 Firmware Update Granularity: No Information Provided 00:14:53.672 Per-Namespace SMART Log: No 00:14:53.672 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.672 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:53.672 Command Effects Log Page: Supported 00:14:53.672 Get Log Page Extended Data: Supported 00:14:53.672 Telemetry Log Pages: Not Supported 00:14:53.672 Persistent Event Log Pages: Not Supported 00:14:53.672 Supported Log Pages Log Page: May Support 00:14:53.672 Commands Supported & Effects Log Page: Not Supported 00:14:53.672 Feature Identifiers & Effects Log Page:May Support 00:14:53.672 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.672 Data Area 4 for Telemetry Log: Not Supported 00:14:53.672 Error Log Page Entries Supported: 128 
00:14:53.672 Keep Alive: Supported 00:14:53.672 Keep Alive Granularity: 10000 ms 00:14:53.672 00:14:53.672 NVM Command Set Attributes 00:14:53.672 ========================== 00:14:53.672 Submission Queue Entry Size 00:14:53.672 Max: 64 00:14:53.672 Min: 64 00:14:53.672 Completion Queue Entry Size 00:14:53.672 Max: 16 00:14:53.672 Min: 16 00:14:53.672 Number of Namespaces: 32 00:14:53.672 Compare Command: Supported 00:14:53.672 Write Uncorrectable Command: Not Supported 00:14:53.672 Dataset Management Command: Supported 00:14:53.672 Write Zeroes Command: Supported 00:14:53.672 Set Features Save Field: Not Supported 00:14:53.672 Reservations: Supported 00:14:53.672 Timestamp: Not Supported 00:14:53.672 Copy: Supported 00:14:53.672 Volatile Write Cache: Present 00:14:53.672 Atomic Write Unit (Normal): 1 00:14:53.672 Atomic Write Unit (PFail): 1 00:14:53.672 Atomic Compare & Write Unit: 1 00:14:53.672 Fused Compare & Write: Supported 00:14:53.672 Scatter-Gather List 00:14:53.672 SGL Command Set: Supported 00:14:53.672 SGL Keyed: Supported 00:14:53.672 SGL Bit Bucket Descriptor: Not Supported 00:14:53.672 SGL Metadata Pointer: Not Supported 00:14:53.672 Oversized SGL: Not Supported 00:14:53.672 SGL Metadata Address: Not Supported 00:14:53.672 SGL Offset: Supported 00:14:53.672 Transport SGL Data Block: Not Supported 00:14:53.672 Replay Protected Memory Block: Not Supported 00:14:53.672 00:14:53.672 Firmware Slot Information 00:14:53.672 ========================= 00:14:53.672 Active slot: 1 00:14:53.672 Slot 1 Firmware Revision: 24.01.1 00:14:53.672 00:14:53.672 00:14:53.672 Commands Supported and Effects 00:14:53.672 ============================== 00:14:53.672 Admin Commands 00:14:53.672 -------------- 00:14:53.672 Get Log Page (02h): Supported 00:14:53.672 Identify (06h): Supported 00:14:53.672 Abort (08h): Supported 00:14:53.672 Set Features (09h): Supported 00:14:53.672 Get Features (0Ah): Supported 00:14:53.672 Asynchronous Event Request (0Ch): Supported 00:14:53.672 Keep Alive (18h): Supported 00:14:53.672 I/O Commands 00:14:53.672 ------------ 00:14:53.672 Flush (00h): Supported LBA-Change 00:14:53.672 Write (01h): Supported LBA-Change 00:14:53.672 Read (02h): Supported 00:14:53.672 Compare (05h): Supported 00:14:53.672 Write Zeroes (08h): Supported LBA-Change 00:14:53.672 Dataset Management (09h): Supported LBA-Change 00:14:53.672 Copy (19h): Supported LBA-Change 00:14:53.672 Unknown (79h): Supported LBA-Change 00:14:53.672 Unknown (7Ah): Supported 00:14:53.672 00:14:53.672 Error Log 00:14:53.672 ========= 00:14:53.672 00:14:53.672 Arbitration 00:14:53.672 =========== 00:14:53.672 Arbitration Burst: 1 00:14:53.672 00:14:53.672 Power Management 00:14:53.672 ================ 00:14:53.672 Number of Power States: 1 00:14:53.672 Current Power State: Power State #0 00:14:53.672 Power State #0: 00:14:53.672 Max Power: 0.00 W 00:14:53.672 Non-Operational State: Operational 00:14:53.672 Entry Latency: Not Reported 00:14:53.672 Exit Latency: Not Reported 00:14:53.672 Relative Read Throughput: 0 00:14:53.672 Relative Read Latency: 0 00:14:53.672 Relative Write Throughput: 0 00:14:53.672 Relative Write Latency: 0 00:14:53.672 Idle Power: Not Reported 00:14:53.672 Active Power: Not Reported 00:14:53.672 Non-Operational Permissive Mode: Not Supported 00:14:53.672 00:14:53.672 Health Information 00:14:53.672 ================== 00:14:53.672 Critical Warnings: 00:14:53.672 Available Spare Space: OK 00:14:53.672 Temperature: OK 00:14:53.672 Device Reliability: OK 00:14:53.672 Read Only: No 00:14:53.672 
Volatile Memory Backup: OK 00:14:53.672 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.672 Temperature Threshold: [2024-12-13 06:53:57.670074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8710e0) on tqpair=0x824510 00:14:53.672 [2024-12-13 06:53:57.670081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.672 [2024-12-13 06:53:57.670087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.672 [2024-12-13 06:53:57.670091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x871240) on tqpair=0x824510 00:14:53.672 [2024-12-13 06:53:57.670201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x824510) 00:14:53.672 [2024-12-13 06:53:57.670219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.672 [2024-12-13 06:53:57.670240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x871240, cid 7, qid 0 00:14:53.672 [2024-12-13 06:53:57.670287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.672 [2024-12-13 06:53:57.670294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.672 [2024-12-13 06:53:57.670298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x871240) on tqpair=0x824510 00:14:53.672 [2024-12-13 06:53:57.670335] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:53.672 [2024-12-13 06:53:57.670348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.672 [2024-12-13 06:53:57.670355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.672 [2024-12-13 06:53:57.670378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.672 [2024-12-13 06:53:57.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.672 [2024-12-13 06:53:57.670405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x824510) 00:14:53.672 [2024-12-13 06:53:57.670424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.672 [2024-12-13 06:53:57.670447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870cc0, cid 3, qid 0 00:14:53.672 [2024-12-13 06:53:57.670497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.672 [2024-12-13 06:53:57.670504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:53.672 [2024-12-13 06:53:57.670508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870cc0) on tqpair=0x824510 00:14:53.672 [2024-12-13 06:53:57.670520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x824510) 00:14:53.672 [2024-12-13 06:53:57.670536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.672 [2024-12-13 06:53:57.670556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870cc0, cid 3, qid 0 00:14:53.672 [2024-12-13 06:53:57.670622] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.672 [2024-12-13 06:53:57.670629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.672 [2024-12-13 06:53:57.670632] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670636] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870cc0) on tqpair=0x824510 00:14:53.672 [2024-12-13 06:53:57.670641] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:53.672 [2024-12-13 06:53:57.670646] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:53.672 [2024-12-13 06:53:57.670656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.672 [2024-12-13 06:53:57.670661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.673 [2024-12-13 06:53:57.670665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x824510) 00:14:53.673 [2024-12-13 06:53:57.670672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.673 [2024-12-13 06:53:57.670689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870cc0, cid 3, qid 0
00:14:53.673 [2024-12-13 06:53:57.675406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.673 [2024-12-13 06:53:57.675427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.673 [2024-12-13 06:53:57.675448] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.673 [2024-12-13 06:53:57.675452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870cc0) on tqpair=0x824510 00:14:53.673 [2024-12-13 06:53:57.675468] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.673 [2024-12-13 06:53:57.675473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.673 [2024-12-13 06:53:57.675477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x824510) 00:14:53.673 [2024-12-13 06:53:57.675486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.673 [2024-12-13 06:53:57.675511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x870cc0, cid 3, qid 0 00:14:53.673 [2024-12-13 06:53:57.675569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.673 [2024-12-13 06:53:57.675575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.673 [2024-12-13 06:53:57.675579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.673 [2024-12-13 06:53:57.675583] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x870cc0) on tqpair=0x824510 00:14:53.673 [2024-12-13 06:53:57.675591]
nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:14:53.673 0 Kelvin (-273 Celsius) 00:14:53.673 Available Spare: 0% 00:14:53.673 Available Spare Threshold: 0% 00:14:53.673 Life Percentage Used: 0% 00:14:53.673 Data Units Read: 0 00:14:53.673 Data Units Written: 0 00:14:53.673 Host Read Commands: 0 00:14:53.673 Host Write Commands: 0 00:14:53.673 Controller Busy Time: 0 minutes 00:14:53.673 Power Cycles: 0 00:14:53.673 Power On Hours: 0 hours 00:14:53.673 Unsafe Shutdowns: 0 00:14:53.673 Unrecoverable Media Errors: 0 00:14:53.673 Lifetime Error Log Entries: 0 00:14:53.673 Warning Temperature Time: 0 minutes 00:14:53.673 Critical Temperature Time: 0 minutes 00:14:53.673 00:14:53.673 Number of Queues 00:14:53.673 ================ 00:14:53.673 Number of I/O Submission Queues: 127 00:14:53.673 Number of I/O Completion Queues: 127 00:14:53.673 00:14:53.673 Active Namespaces 00:14:53.673 ================= 00:14:53.673 Namespace ID:1 00:14:53.673 Error Recovery Timeout: Unlimited 00:14:53.673 Command Set Identifier: NVM (00h) 00:14:53.673 Deallocate: Supported 00:14:53.673 Deallocated/Unwritten Error: Not Supported 00:14:53.673 Deallocated Read Value: Unknown 00:14:53.673 Deallocate in Write Zeroes: Not Supported 00:14:53.673 Deallocated Guard Field: 0xFFFF 00:14:53.673 Flush: Supported 00:14:53.673 Reservation: Supported 00:14:53.673 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.673 Size (in LBAs): 131072 (0GiB) 00:14:53.673 Capacity (in LBAs): 131072 (0GiB) 00:14:53.673 Utilization (in LBAs): 131072 (0GiB) 00:14:53.673 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:53.673 EUI64: ABCDEF0123456789 00:14:53.673 UUID: 62041565-2aae-4a8e-bf98-f8369d22019c 00:14:53.673 Thin Provisioning: Not Supported 00:14:53.673 Per-NS Atomic Units: Yes 00:14:53.673 Atomic Boundary Size (Normal): 0 00:14:53.673 Atomic Boundary Size (PFail): 0 00:14:53.673 Atomic Boundary Offset: 0 00:14:53.673 Maximum Single Source Range Length: 65535 00:14:53.673 Maximum Copy Length: 65535 00:14:53.673 Maximum Source Range Count: 1 00:14:53.673 NGUID/EUI64 Never Reused: No 00:14:53.673 Namespace Write Protected: No 00:14:53.673 Number of LBA Formats: 1 00:14:53.673 Current LBA Format: LBA Format #00 00:14:53.673 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.673 00:14:53.673 06:53:57 -- host/identify.sh@51 -- # sync 00:14:53.673 06:53:57 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.673 06:53:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.673 06:53:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.673 06:53:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.673 06:53:57 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:53.673 06:53:57 -- host/identify.sh@56 -- # nvmftestfini 00:14:53.673 06:53:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.673 06:53:57 -- nvmf/common.sh@116 -- # sync 00:14:53.673 06:53:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.673 06:53:57 -- nvmf/common.sh@119 -- # set +e 00:14:53.674 06:53:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.674 06:53:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.674 rmmod nvme_tcp 00:14:53.674 rmmod nvme_fabrics 00:14:53.674 rmmod nvme_keyring 00:14:53.674 06:53:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.674 06:53:57 -- nvmf/common.sh@123 -- # set -e 00:14:53.674 06:53:57 -- nvmf/common.sh@124 -- # return 0 00:14:53.674 
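The teardown traced here reduces to a short sequence: drop the subsystem over RPC, stop the target process, and unload the initiator-side kernel modules. A minimal manual equivalent, using only commands and arguments that appear in the trace (the pid check is a simplified stand-in for the harness's killprocess helper, which additionally verifies the process name before killing it):

    # target side: remove the subsystem that was under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # stop the nvmf_tgt app if it is still alive, then reap it
    kill -0 "$nvmfpid" && kill "$nvmfpid"
    wait "$nvmfpid"
    # initiator side: unload the kernel NVMe/TCP stack, as nvmftestfini does
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics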
06:53:57 -- nvmf/common.sh@477 -- # '[' -n 80293 ']' 00:14:53.674 06:53:57 -- nvmf/common.sh@478 -- # killprocess 80293 00:14:53.674 06:53:57 -- common/autotest_common.sh@936 -- # '[' -z 80293 ']' 00:14:53.674 06:53:57 -- common/autotest_common.sh@940 -- # kill -0 80293 00:14:53.674 06:53:57 -- common/autotest_common.sh@941 -- # uname 00:14:53.674 06:53:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.674 06:53:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80293 00:14:53.674 killing process with pid 80293 00:14:53.674 06:53:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:53.674 06:53:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.674 06:53:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80293' 00:14:53.674 06:53:57 -- common/autotest_common.sh@955 -- # kill 80293 00:14:53.674 [2024-12-13 06:53:57.844608] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:53.674 06:53:57 -- common/autotest_common.sh@960 -- # wait 80293 00:14:53.674 06:53:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.674 06:53:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.674 06:53:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.674 06:53:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.674 06:53:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.674 06:53:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.674 06:53:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.674 06:53:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.674 06:53:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:53.674 00:14:53.674 real 0m1.726s 00:14:53.674 user 0m3.872s 00:14:53.674 sys 0m0.544s 00:14:53.674 ************************************ 00:14:53.674 END TEST nvmf_identify 00:14:53.674 ************************************ 00:14:53.674 06:53:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.674 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.674 06:53:58 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:53.674 06:53:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.674 06:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.674 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:14:53.674 ************************************ 00:14:53.674 START TEST nvmf_perf 00:14:53.674 ************************************ 00:14:53.674 06:53:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:53.674 * Looking for test storage... 
00:14:53.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:53.674 06:53:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.674 06:53:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.674 06:53:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.934 06:53:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.934 06:53:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.934 06:53:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.934 06:53:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.934 06:53:58 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.934 06:53:58 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.934 06:53:58 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.934 06:53:58 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.934 06:53:58 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.934 06:53:58 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.934 06:53:58 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.934 06:53:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.934 06:53:58 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.934 06:53:58 -- scripts/common.sh@344 -- # : 1 00:14:53.934 06:53:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.934 06:53:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.934 06:53:58 -- scripts/common.sh@364 -- # decimal 1 00:14:53.934 06:53:58 -- scripts/common.sh@352 -- # local d=1 00:14:53.934 06:53:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.934 06:53:58 -- scripts/common.sh@354 -- # echo 1 00:14:53.934 06:53:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.934 06:53:58 -- scripts/common.sh@365 -- # decimal 2 00:14:53.934 06:53:58 -- scripts/common.sh@352 -- # local d=2 00:14:53.934 06:53:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.934 06:53:58 -- scripts/common.sh@354 -- # echo 2 00:14:53.934 06:53:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.934 06:53:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.934 06:53:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.934 06:53:58 -- scripts/common.sh@367 -- # return 0 00:14:53.934 06:53:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.934 06:53:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.934 --rc genhtml_branch_coverage=1 00:14:53.934 --rc genhtml_function_coverage=1 00:14:53.934 --rc genhtml_legend=1 00:14:53.934 --rc geninfo_all_blocks=1 00:14:53.934 --rc geninfo_unexecuted_blocks=1 00:14:53.934 00:14:53.934 ' 00:14:53.934 06:53:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.934 --rc genhtml_branch_coverage=1 00:14:53.934 --rc genhtml_function_coverage=1 00:14:53.934 --rc genhtml_legend=1 00:14:53.934 --rc geninfo_all_blocks=1 00:14:53.934 --rc geninfo_unexecuted_blocks=1 00:14:53.934 00:14:53.934 ' 00:14:53.934 06:53:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.934 --rc genhtml_branch_coverage=1 00:14:53.934 --rc genhtml_function_coverage=1 00:14:53.934 --rc genhtml_legend=1 00:14:53.934 --rc geninfo_all_blocks=1 00:14:53.934 --rc geninfo_unexecuted_blocks=1 00:14:53.934 00:14:53.934 ' 00:14:53.934 
06:53:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.934 --rc genhtml_branch_coverage=1 00:14:53.934 --rc genhtml_function_coverage=1 00:14:53.934 --rc genhtml_legend=1 00:14:53.934 --rc geninfo_all_blocks=1 00:14:53.934 --rc geninfo_unexecuted_blocks=1 00:14:53.934 00:14:53.934 ' 00:14:53.934 06:53:58 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.934 06:53:58 -- nvmf/common.sh@7 -- # uname -s 00:14:53.934 06:53:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.934 06:53:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.934 06:53:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.934 06:53:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.934 06:53:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.934 06:53:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.934 06:53:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.934 06:53:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.934 06:53:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.934 06:53:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:14:53.934 06:53:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:14:53.934 06:53:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.934 06:53:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.934 06:53:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.934 06:53:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.934 06:53:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.934 06:53:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.934 06:53:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.934 06:53:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.934 06:53:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.934 06:53:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.934 06:53:58 -- paths/export.sh@5 -- # export PATH 00:14:53.934 06:53:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.934 06:53:58 -- nvmf/common.sh@46 -- # : 0 00:14:53.934 06:53:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.934 06:53:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.934 06:53:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.934 06:53:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.934 06:53:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.934 06:53:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.934 06:53:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.934 06:53:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.934 06:53:58 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:53.934 06:53:58 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:53.934 06:53:58 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.934 06:53:58 -- host/perf.sh@17 -- # nvmftestinit 00:14:53.934 06:53:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:53.934 06:53:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.934 06:53:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.934 06:53:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.934 06:53:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.934 06:53:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.934 06:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.934 06:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.934 06:53:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:53.934 06:53:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:53.934 06:53:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.934 06:53:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.934 06:53:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.934 06:53:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:53.934 06:53:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.934 06:53:58 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.934 06:53:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.934 06:53:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.934 06:53:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.934 06:53:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.934 06:53:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.934 06:53:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.934 06:53:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:53.934 06:53:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:53.934 Cannot find device "nvmf_tgt_br" 00:14:53.934 06:53:58 -- nvmf/common.sh@154 -- # true 00:14:53.934 06:53:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.934 Cannot find device "nvmf_tgt_br2" 00:14:53.934 06:53:58 -- nvmf/common.sh@155 -- # true 00:14:53.934 06:53:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:53.935 06:53:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:53.935 Cannot find device "nvmf_tgt_br" 00:14:53.935 06:53:58 -- nvmf/common.sh@157 -- # true 00:14:53.935 06:53:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:53.935 Cannot find device "nvmf_tgt_br2" 00:14:53.935 06:53:58 -- nvmf/common.sh@158 -- # true 00:14:53.935 06:53:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:53.935 06:53:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:53.935 06:53:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.935 06:53:58 -- nvmf/common.sh@161 -- # true 00:14:53.935 06:53:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.935 06:53:58 -- nvmf/common.sh@162 -- # true 00:14:53.935 06:53:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.935 06:53:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.935 06:53:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.935 06:53:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.935 06:53:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.194 06:53:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.194 06:53:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.194 06:53:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.194 06:53:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.194 06:53:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:54.194 06:53:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:54.194 06:53:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:54.194 06:53:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:54.194 06:53:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.194 06:53:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
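Condensed, the veth/namespace topology that nvmf_veth_init is assembling in the trace above (interface names and addresses exactly as logged) looks like this:

    # the target runs in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one initiator leg on the host, two target legs
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # address the legs: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

The *_br peer ends are then enslaved to the nvmf_br bridge and an iptables ACCEPT rule is added for TCP port 4420, which is what the three successful pings in the trace that follows are verifying.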
00:14:54.194 06:53:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.194 06:53:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:54.194 06:53:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:54.194 06:53:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.194 06:53:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.194 06:53:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.194 06:53:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.194 06:53:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.194 06:53:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:54.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:14:54.194 00:14:54.194 --- 10.0.0.2 ping statistics --- 00:14:54.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.194 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:54.194 06:53:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:54.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:54.194 00:14:54.194 --- 10.0.0.3 ping statistics --- 00:14:54.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.194 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:54.194 06:53:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:54.194 00:14:54.194 --- 10.0.0.1 ping statistics --- 00:14:54.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.194 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:54.194 06:53:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.194 06:53:58 -- nvmf/common.sh@421 -- # return 0 00:14:54.194 06:53:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:54.194 06:53:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.194 06:53:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:54.194 06:53:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:54.194 06:53:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.194 06:53:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:54.194 06:53:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:54.194 06:53:58 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:54.194 06:53:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.194 06:53:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.194 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:14:54.194 06:53:58 -- nvmf/common.sh@469 -- # nvmfpid=80497 00:14:54.194 06:53:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.194 06:53:58 -- nvmf/common.sh@470 -- # waitforlisten 80497 00:14:54.194 06:53:58 -- common/autotest_common.sh@829 -- # '[' -z 80497 ']' 00:14:54.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
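The nvmfappstart/waitforlisten pair traced next boils down to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. A simplified sketch of that idea, with the binary path and flags taken from the trace (rpc_get_methods is a standard SPDK RPC; the real waitforlisten helper also does extra liveness bookkeeping):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # the RPC socket lives on the shared filesystem, so no netns needed here
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done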
00:14:54.194 06:53:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.194 06:53:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.194 06:53:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.194 06:53:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.194 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:14:54.194 [2024-12-13 06:53:58.680165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:54.194 [2024-12-13 06:53:58.680526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.454 [2024-12-13 06:53:58.820321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.454 [2024-12-13 06:53:58.852587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.454 [2024-12-13 06:53:58.852980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.454 [2024-12-13 06:53:58.853002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.454 [2024-12-13 06:53:58.853011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.454 [2024-12-13 06:53:58.853162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.454 [2024-12-13 06:53:58.853421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.454 [2024-12-13 06:53:58.853493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.454 [2024-12-13 06:53:58.853494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.454 06:53:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.454 06:53:58 -- common/autotest_common.sh@862 -- # return 0 00:14:54.454 06:53:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.454 06:53:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.454 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:14:54.454 06:53:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.454 06:53:58 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:54.454 06:53:58 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:55.021 06:53:59 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:55.021 06:53:59 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:55.279 06:53:59 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:55.279 06:53:59 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.538 06:53:59 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:55.538 06:53:59 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:55.538 06:53:59 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:55.538 06:53:59 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:55.538 06:53:59 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:55.797 [2024-12-13 06:54:00.202931] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.797 
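With the TCP transport initialized, the RPCs traced next provision the target end to end; condensed, with every name and argument exactly as it appears in the log (-a allows any host NQN to connect, -s sets the subsystem serial number):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o           # options as traced (NVMF_TRANSPORT_OPTS='-t tcp -o')
    $rpc bdev_malloc_create 64 512                 # 64 MB ram bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this, spdk_nvme_perf can reach two namespaces (the malloc bdev and the local NVMe drive) through 10.0.0.2:4420, which is what the fabric runs below exercise; the first perf invocation targets the local PCIe drive directly as a baseline.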
06:54:00 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.055 06:54:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:56.055 06:54:00 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.313 06:54:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:56.313 06:54:00 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:56.572 06:54:00 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.829 [2024-12-13 06:54:01.168355] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.829 06:54:01 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.087 06:54:01 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:57.087 06:54:01 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:57.087 06:54:01 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:57.087 06:54:01 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:58.463 Initializing NVMe Controllers 00:14:58.463 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:58.463 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:58.463 Initialization complete. Launching workers. 00:14:58.463 ======================================================== 00:14:58.463 Latency(us) 00:14:58.463 Device Information : IOPS MiB/s Average min max 00:14:58.463 PCIE (0000:00:06.0) NSID 1 from core 0: 23647.98 92.37 1352.50 339.07 8899.10 00:14:58.463 ======================================================== 00:14:58.463 Total : 23647.98 92.37 1352.50 339.07 8899.10 00:14:58.463 00:14:58.463 06:54:02 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:59.399 Initializing NVMe Controllers 00:14:59.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:59.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:59.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:59.399 Initialization complete. Launching workers. 
00:14:59.399 ======================================================== 00:14:59.399 Latency(us) 00:14:59.399 Device Information : IOPS MiB/s Average min max 00:14:59.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3653.68 14.27 273.38 101.65 7222.87 00:14:59.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.75 0.49 8071.22 5974.52 11991.65 00:14:59.399 ======================================================== 00:14:59.399 Total : 3778.43 14.76 530.83 101.65 11991.65 00:14:59.399 00:14:59.399 06:54:03 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:00.805 Initializing NVMe Controllers 00:15:00.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:00.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:00.805 Initialization complete. Launching workers. 00:15:00.805 ======================================================== 00:15:00.805 Latency(us) 00:15:00.805 Device Information : IOPS MiB/s Average min max 00:15:00.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8671.53 33.87 3691.54 491.68 9551.32 00:15:00.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3949.31 15.43 8146.08 6760.79 16481.85 00:15:00.805 ======================================================== 00:15:00.805 Total : 12620.84 49.30 5085.45 491.68 16481.85 00:15:00.805 00:15:00.805 06:54:05 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:00.805 06:54:05 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:03.336 Initializing NVMe Controllers 00:15:03.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.336 Controller IO queue size 128, less than required. 00:15:03.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.336 Controller IO queue size 128, less than required. 00:15:03.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:03.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:03.336 Initialization complete. Launching workers. 
00:15:03.336 ======================================================== 00:15:03.336 Latency(us) 00:15:03.336 Device Information : IOPS MiB/s Average min max 00:15:03.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1657.99 414.50 77653.71 41552.93 161474.89 00:15:03.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 650.50 162.62 208145.12 107705.34 305942.58 00:15:03.336 ======================================================== 00:15:03.336 Total : 2308.48 577.12 114424.19 41552.93 305942.58 00:15:03.336 00:15:03.336 06:54:07 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:03.594 No valid NVMe controllers or AIO or URING devices found 00:15:03.594 Initializing NVMe Controllers 00:15:03.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.594 Controller IO queue size 128, less than required. 00:15:03.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.594 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:03.594 Controller IO queue size 128, less than required. 00:15:03.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.594 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:03.594 WARNING: Some requested NVMe devices were skipped 00:15:03.594 06:54:07 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:06.127 Initializing NVMe Controllers 00:15:06.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.127 Controller IO queue size 128, less than required. 00:15:06.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.127 Controller IO queue size 128, less than required. 00:15:06.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:06.127 Initialization complete. Launching workers. 
00:15:06.127 00:15:06.127 ==================== 00:15:06.127 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:06.127 TCP transport: 00:15:06.127 polls: 8022 00:15:06.127 idle_polls: 0 00:15:06.127 sock_completions: 8022 00:15:06.127 nvme_completions: 6690 00:15:06.127 submitted_requests: 10161 00:15:06.127 queued_requests: 1 00:15:06.127 00:15:06.127 ==================== 00:15:06.127 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:06.127 TCP transport: 00:15:06.127 polls: 7272 00:15:06.127 idle_polls: 0 00:15:06.127 sock_completions: 7272 00:15:06.127 nvme_completions: 6673 00:15:06.127 submitted_requests: 10125 00:15:06.127 queued_requests: 1 00:15:06.127 ======================================================== 00:15:06.127 Latency(us) 00:15:06.127 Device Information : IOPS MiB/s Average min max 00:15:06.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1735.87 433.97 74819.08 35475.25 121022.12 00:15:06.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1731.87 432.97 75193.71 36259.98 127112.46 00:15:06.127 ======================================================== 00:15:06.127 Total : 3467.74 866.94 75006.18 35475.25 127112.46 00:15:06.127 00:15:06.127 06:54:10 -- host/perf.sh@66 -- # sync 00:15:06.127 06:54:10 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.386 06:54:10 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:06.386 06:54:10 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:06.386 06:54:10 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:06.645 06:54:11 -- host/perf.sh@72 -- # ls_guid=041d2345-69bc-44a2-8b2b-7aa41d77e6a1 00:15:06.645 06:54:11 -- host/perf.sh@73 -- # get_lvs_free_mb 041d2345-69bc-44a2-8b2b-7aa41d77e6a1 00:15:06.645 06:54:11 -- common/autotest_common.sh@1353 -- # local lvs_uuid=041d2345-69bc-44a2-8b2b-7aa41d77e6a1 00:15:06.645 06:54:11 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:06.645 06:54:11 -- common/autotest_common.sh@1355 -- # local fc 00:15:06.645 06:54:11 -- common/autotest_common.sh@1356 -- # local cs 00:15:06.646 06:54:11 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:06.905 06:54:11 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:06.905 { 00:15:06.905 "uuid": "041d2345-69bc-44a2-8b2b-7aa41d77e6a1", 00:15:06.905 "name": "lvs_0", 00:15:06.905 "base_bdev": "Nvme0n1", 00:15:06.905 "total_data_clusters": 1278, 00:15:06.905 "free_clusters": 1278, 00:15:06.905 "block_size": 4096, 00:15:06.905 "cluster_size": 4194304 00:15:06.905 } 00:15:06.905 ]' 00:15:06.905 06:54:11 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="041d2345-69bc-44a2-8b2b-7aa41d77e6a1") .free_clusters' 00:15:06.905 06:54:11 -- common/autotest_common.sh@1358 -- # fc=1278 00:15:06.905 06:54:11 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="041d2345-69bc-44a2-8b2b-7aa41d77e6a1") .cluster_size' 00:15:07.164 5112 00:15:07.164 06:54:11 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:07.164 06:54:11 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:15:07.164 06:54:11 -- common/autotest_common.sh@1363 -- # echo 5112 00:15:07.164 06:54:11 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:07.164 06:54:11 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u 041d2345-69bc-44a2-8b2b-7aa41d77e6a1 lbd_0 5112 00:15:07.423 06:54:11 -- host/perf.sh@80 -- # lb_guid=f60b93e4-9f00-4458-9688-78d0a02a76b7 00:15:07.423 06:54:11 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f60b93e4-9f00-4458-9688-78d0a02a76b7 lvs_n_0 00:15:07.682 06:54:12 -- host/perf.sh@83 -- # ls_nested_guid=fe20473a-ea88-40ab-a69d-55ba6b124e8a 00:15:07.682 06:54:12 -- host/perf.sh@84 -- # get_lvs_free_mb fe20473a-ea88-40ab-a69d-55ba6b124e8a 00:15:07.682 06:54:12 -- common/autotest_common.sh@1353 -- # local lvs_uuid=fe20473a-ea88-40ab-a69d-55ba6b124e8a 00:15:07.682 06:54:12 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:07.682 06:54:12 -- common/autotest_common.sh@1355 -- # local fc 00:15:07.682 06:54:12 -- common/autotest_common.sh@1356 -- # local cs 00:15:07.682 06:54:12 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:07.941 06:54:12 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:07.941 { 00:15:07.941 "uuid": "041d2345-69bc-44a2-8b2b-7aa41d77e6a1", 00:15:07.941 "name": "lvs_0", 00:15:07.941 "base_bdev": "Nvme0n1", 00:15:07.941 "total_data_clusters": 1278, 00:15:07.941 "free_clusters": 0, 00:15:07.941 "block_size": 4096, 00:15:07.941 "cluster_size": 4194304 00:15:07.941 }, 00:15:07.941 { 00:15:07.941 "uuid": "fe20473a-ea88-40ab-a69d-55ba6b124e8a", 00:15:07.941 "name": "lvs_n_0", 00:15:07.941 "base_bdev": "f60b93e4-9f00-4458-9688-78d0a02a76b7", 00:15:07.941 "total_data_clusters": 1276, 00:15:07.941 "free_clusters": 1276, 00:15:07.941 "block_size": 4096, 00:15:07.941 "cluster_size": 4194304 00:15:07.941 } 00:15:07.941 ]' 00:15:07.941 06:54:12 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="fe20473a-ea88-40ab-a69d-55ba6b124e8a") .free_clusters' 00:15:07.941 06:54:12 -- common/autotest_common.sh@1358 -- # fc=1276 00:15:07.941 06:54:12 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="fe20473a-ea88-40ab-a69d-55ba6b124e8a") .cluster_size' 00:15:07.941 5104 00:15:07.941 06:54:12 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:07.941 06:54:12 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:15:07.941 06:54:12 -- common/autotest_common.sh@1363 -- # echo 5104 00:15:07.941 06:54:12 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:07.941 06:54:12 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fe20473a-ea88-40ab-a69d-55ba6b124e8a lbd_nest_0 5104 00:15:08.200 06:54:12 -- host/perf.sh@88 -- # lb_nested_guid=c0e86c2d-a08a-4fe8-af59-f68094012e2f 00:15:08.200 06:54:12 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.459 06:54:12 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:08.459 06:54:12 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c0e86c2d-a08a-4fe8-af59-f68094012e2f 00:15:08.717 06:54:13 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.976 06:54:13 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:08.976 06:54:13 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:08.976 06:54:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:08.976 06:54:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:08.976 06:54:13 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:09.235 No valid NVMe controllers or AIO or URING devices found 00:15:09.235 Initializing NVMe Controllers 00:15:09.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.235 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:09.235 WARNING: Some requested NVMe devices were skipped 00:15:09.235 06:54:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:09.235 06:54:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:21.474 Initializing NVMe Controllers 00:15:21.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:21.474 Initialization complete. Launching workers. 00:15:21.474 ======================================================== 00:15:21.474 Latency(us) 00:15:21.474 Device Information : IOPS MiB/s Average min max 00:15:21.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 943.30 117.91 1058.87 307.95 8660.49 00:15:21.474 ======================================================== 00:15:21.474 Total : 943.30 117.91 1058.87 307.95 8660.49 00:15:21.474 00:15:21.474 06:54:23 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:21.474 06:54:23 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:21.474 06:54:23 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:21.474 No valid NVMe controllers or AIO or URING devices found 00:15:21.474 Initializing NVMe Controllers 00:15:21.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.474 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:21.474 WARNING: Some requested NVMe devices were skipped 00:15:21.474 06:54:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:21.474 06:54:24 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:31.477 Initializing NVMe Controllers 00:15:31.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.477 Initialization complete. Launching workers. 
00:15:31.477 ======================================================== 00:15:31.477 Latency(us) 00:15:31.477 Device Information : IOPS MiB/s Average min max 00:15:31.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1340.20 167.52 23926.62 5207.38 55949.93 00:15:31.477 ======================================================== 00:15:31.477 Total : 1340.20 167.52 23926.62 5207.38 55949.93 00:15:31.477 00:15:31.477 06:54:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:31.477 06:54:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:31.477 06:54:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:31.477 No valid NVMe controllers or AIO or URING devices found 00:15:31.477 Initializing NVMe Controllers 00:15:31.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.477 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:31.477 WARNING: Some requested NVMe devices were skipped 00:15:31.477 06:54:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:31.477 06:54:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:41.453 Initializing NVMe Controllers 00:15:41.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.453 Controller IO queue size 128, less than required. 00:15:41.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.453 Initialization complete. Launching workers. 
00:15:41.453 ======================================================== 00:15:41.453 Latency(us) 00:15:41.453 Device Information : IOPS MiB/s Average min max 00:15:41.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4046.89 505.86 31683.13 12251.52 59840.12 00:15:41.453 ======================================================== 00:15:41.453 Total : 4046.89 505.86 31683.13 12251.52 59840.12 00:15:41.453 00:15:41.453 06:54:45 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.453 06:54:45 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c0e86c2d-a08a-4fe8-af59-f68094012e2f 00:15:41.453 06:54:45 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:41.711 06:54:46 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f60b93e4-9f00-4458-9688-78d0a02a76b7 00:15:41.970 06:54:46 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:42.229 06:54:46 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:42.229 06:54:46 -- host/perf.sh@114 -- # nvmftestfini 00:15:42.229 06:54:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:42.229 06:54:46 -- nvmf/common.sh@116 -- # sync 00:15:42.229 06:54:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:42.229 06:54:46 -- nvmf/common.sh@119 -- # set +e 00:15:42.229 06:54:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:42.229 06:54:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:42.229 rmmod nvme_tcp 00:15:42.229 rmmod nvme_fabrics 00:15:42.229 rmmod nvme_keyring 00:15:42.229 06:54:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:42.229 06:54:46 -- nvmf/common.sh@123 -- # set -e 00:15:42.229 06:54:46 -- nvmf/common.sh@124 -- # return 0 00:15:42.229 06:54:46 -- nvmf/common.sh@477 -- # '[' -n 80497 ']' 00:15:42.229 06:54:46 -- nvmf/common.sh@478 -- # killprocess 80497 00:15:42.229 06:54:46 -- common/autotest_common.sh@936 -- # '[' -z 80497 ']' 00:15:42.229 06:54:46 -- common/autotest_common.sh@940 -- # kill -0 80497 00:15:42.229 06:54:46 -- common/autotest_common.sh@941 -- # uname 00:15:42.229 06:54:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:42.229 06:54:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80497 00:15:42.229 killing process with pid 80497 00:15:42.229 06:54:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:42.229 06:54:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:42.229 06:54:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80497' 00:15:42.229 06:54:46 -- common/autotest_common.sh@955 -- # kill 80497 00:15:42.229 06:54:46 -- common/autotest_common.sh@960 -- # wait 80497 00:15:43.607 06:54:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:43.607 06:54:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:43.607 06:54:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:43.607 06:54:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.607 06:54:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:43.607 06:54:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.607 06:54:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.607 06:54:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.607 06:54:48 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:43.607 ************************************ 00:15:43.607 END TEST nvmf_perf 00:15:43.607 ************************************ 00:15:43.607 00:15:43.607 real 0m49.947s 00:15:43.607 user 3m8.436s 00:15:43.607 sys 0m12.580s 00:15:43.607 06:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:43.607 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:15:43.607 06:54:48 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.607 06:54:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:43.607 06:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.607 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:15:43.607 ************************************ 00:15:43.607 START TEST nvmf_fio_host 00:15:43.607 ************************************ 00:15:43.607 06:54:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:43.867 * Looking for test storage... 00:15:43.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.867 06:54:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:43.867 06:54:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:43.867 06:54:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:43.867 06:54:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:43.867 06:54:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:43.867 06:54:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:43.867 06:54:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:43.867 06:54:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:43.867 06:54:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:43.867 06:54:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.867 06:54:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:43.867 06:54:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:43.867 06:54:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:43.867 06:54:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:43.867 06:54:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:43.867 06:54:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:43.867 06:54:48 -- scripts/common.sh@344 -- # : 1 00:15:43.867 06:54:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:43.867 06:54:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.867 06:54:48 -- scripts/common.sh@364 -- # decimal 1 00:15:43.867 06:54:48 -- scripts/common.sh@352 -- # local d=1 00:15:43.867 06:54:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.867 06:54:48 -- scripts/common.sh@354 -- # echo 1 00:15:43.867 06:54:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:43.867 06:54:48 -- scripts/common.sh@365 -- # decimal 2 00:15:43.867 06:54:48 -- scripts/common.sh@352 -- # local d=2 00:15:43.867 06:54:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.867 06:54:48 -- scripts/common.sh@354 -- # echo 2 00:15:43.867 06:54:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:43.867 06:54:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:43.867 06:54:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:43.867 06:54:48 -- scripts/common.sh@367 -- # return 0 00:15:43.867 06:54:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.867 06:54:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.867 --rc genhtml_branch_coverage=1 00:15:43.867 --rc genhtml_function_coverage=1 00:15:43.867 --rc genhtml_legend=1 00:15:43.867 --rc geninfo_all_blocks=1 00:15:43.867 --rc geninfo_unexecuted_blocks=1 00:15:43.867 00:15:43.867 ' 00:15:43.867 06:54:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.867 --rc genhtml_branch_coverage=1 00:15:43.867 --rc genhtml_function_coverage=1 00:15:43.867 --rc genhtml_legend=1 00:15:43.867 --rc geninfo_all_blocks=1 00:15:43.867 --rc geninfo_unexecuted_blocks=1 00:15:43.867 00:15:43.867 ' 00:15:43.867 06:54:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.867 --rc genhtml_branch_coverage=1 00:15:43.867 --rc genhtml_function_coverage=1 00:15:43.867 --rc genhtml_legend=1 00:15:43.867 --rc geninfo_all_blocks=1 00:15:43.867 --rc geninfo_unexecuted_blocks=1 00:15:43.867 00:15:43.867 ' 00:15:43.867 06:54:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.867 --rc genhtml_branch_coverage=1 00:15:43.867 --rc genhtml_function_coverage=1 00:15:43.867 --rc genhtml_legend=1 00:15:43.867 --rc geninfo_all_blocks=1 00:15:43.867 --rc geninfo_unexecuted_blocks=1 00:15:43.867 00:15:43.867 ' 00:15:43.867 06:54:48 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.867 06:54:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.867 06:54:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.867 06:54:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.867 06:54:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.867 06:54:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.867 06:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.867 06:54:48 -- paths/export.sh@5 -- # export PATH 00:15:43.867 06:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.867 06:54:48 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.867 06:54:48 -- nvmf/common.sh@7 -- # uname -s 00:15:43.867 06:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.867 06:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.867 06:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.867 06:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.867 06:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.867 06:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.867 06:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.867 06:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.867 06:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.867 06:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.867 06:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:15:43.867 06:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:15:43.867 06:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.867 06:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.867 06:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.867 06:54:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.867 06:54:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.867 06:54:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.867 06:54:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.867 06:54:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.868 06:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.868 06:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.868 06:54:48 -- paths/export.sh@5 -- # export PATH 00:15:43.868 06:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.868 06:54:48 -- nvmf/common.sh@46 -- # : 0 00:15:43.868 06:54:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:43.868 06:54:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:43.868 06:54:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:43.868 06:54:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.868 06:54:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.868 06:54:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:43.868 06:54:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:43.868 06:54:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:43.868 06:54:48 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.868 06:54:48 -- host/fio.sh@14 -- # nvmftestinit 00:15:43.868 06:54:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:43.868 06:54:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.868 06:54:48 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:15:43.868 06:54:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:43.868 06:54:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:43.868 06:54:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.868 06:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.868 06:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.868 06:54:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:43.868 06:54:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:43.868 06:54:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:43.868 06:54:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:43.868 06:54:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:43.868 06:54:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:43.868 06:54:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.868 06:54:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.868 06:54:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.868 06:54:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:43.868 06:54:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.868 06:54:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.868 06:54:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.868 06:54:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.868 06:54:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.868 06:54:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.868 06:54:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.868 06:54:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.868 06:54:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:43.868 06:54:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:43.868 Cannot find device "nvmf_tgt_br" 00:15:43.868 06:54:48 -- nvmf/common.sh@154 -- # true 00:15:43.868 06:54:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.868 Cannot find device "nvmf_tgt_br2" 00:15:43.868 06:54:48 -- nvmf/common.sh@155 -- # true 00:15:43.868 06:54:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:43.868 06:54:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:43.868 Cannot find device "nvmf_tgt_br" 00:15:43.868 06:54:48 -- nvmf/common.sh@157 -- # true 00:15:43.868 06:54:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:43.868 Cannot find device "nvmf_tgt_br2" 00:15:43.868 06:54:48 -- nvmf/common.sh@158 -- # true 00:15:43.868 06:54:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:44.127 06:54:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:44.127 06:54:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.127 06:54:48 -- nvmf/common.sh@161 -- # true 00:15:44.127 06:54:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.127 06:54:48 -- nvmf/common.sh@162 -- # true 00:15:44.127 06:54:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.127 06:54:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.127 06:54:48 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.127 06:54:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.127 06:54:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.127 06:54:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.127 06:54:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.127 06:54:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.127 06:54:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.127 06:54:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:44.127 06:54:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:44.127 06:54:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:44.127 06:54:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:44.127 06:54:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.127 06:54:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.127 06:54:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.127 06:54:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:44.127 06:54:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:44.127 06:54:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.127 06:54:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.127 06:54:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.127 06:54:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.127 06:54:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.127 06:54:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:44.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:44.127 00:15:44.127 --- 10.0.0.2 ping statistics --- 00:15:44.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.127 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:44.127 06:54:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:44.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:44.127 00:15:44.127 --- 10.0.0.3 ping statistics --- 00:15:44.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.127 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:44.127 06:54:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:44.127 00:15:44.127 --- 10.0.0.1 ping statistics --- 00:15:44.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.127 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:44.127 06:54:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.127 06:54:48 -- nvmf/common.sh@421 -- # return 0 00:15:44.127 06:54:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.127 06:54:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.127 06:54:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:44.127 06:54:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:44.127 06:54:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.127 06:54:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:44.127 06:54:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:44.127 06:54:48 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:44.127 06:54:48 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:44.127 06:54:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.127 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:15:44.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.386 06:54:48 -- host/fio.sh@24 -- # nvmfpid=81321 00:15:44.386 06:54:48 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.386 06:54:48 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.386 06:54:48 -- host/fio.sh@28 -- # waitforlisten 81321 00:15:44.386 06:54:48 -- common/autotest_common.sh@829 -- # '[' -z 81321 ']' 00:15:44.386 06:54:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.386 06:54:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.386 06:54:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.387 06:54:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.387 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:15:44.387 [2024-12-13 06:54:48.685508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:44.387 [2024-12-13 06:54:48.685573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.387 [2024-12-13 06:54:48.823690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.387 [2024-12-13 06:54:48.865102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.387 [2024-12-13 06:54:48.865538] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.387 [2024-12-13 06:54:48.865709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.387 [2024-12-13 06:54:48.865879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
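The nvmfpid/waitforlisten lines above show the target being launched inside the test namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, which is why "Total cores available: 4" is followed below by a reactor thread on each of cores 0-3), after which waitforlisten blocks until the app answers on its RPC socket; the trace_flags *ERROR* about an over-long tracepoint name is harmless here, since the reactors come up right after it. As a rough sketch of the launch-and-wait handshake (waitforlisten itself lives in autotest_common.sh; rpc_get_methods is just a cheap RPC used here to probe readiness on the default /var/tmp/spdk.sock socket):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the RPC server accepts requests, bailing out if the target dies first
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.2
    done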
00:15:44.387 [2024-12-13 06:54:48.866084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.387 [2024-12-13 06:54:48.866235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.387 [2024-12-13 06:54:48.866922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.387 [2024-12-13 06:54:48.866984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.322 06:54:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.322 06:54:49 -- common/autotest_common.sh@862 -- # return 0 00:15:45.322 06:54:49 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:45.590 [2024-12-13 06:54:49.913310] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.590 06:54:49 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:45.590 06:54:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.590 06:54:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.590 06:54:49 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.851 Malloc1 00:15:45.851 06:54:50 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.109 06:54:50 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.368 06:54:50 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.627 [2024-12-13 06:54:51.014805] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.627 06:54:51 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.886 06:54:51 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:46.886 06:54:51 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:46.886 06:54:51 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:46.886 06:54:51 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:46.886 06:54:51 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:46.886 06:54:51 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:46.886 06:54:51 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.886 06:54:51 -- common/autotest_common.sh@1330 -- # shift 00:15:46.886 06:54:51 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:46.886 06:54:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:46.886 06:54:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:46.886 06:54:51 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:46.886 06:54:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:46.886 06:54:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:46.886 06:54:51 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:46.886 06:54:51 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:47.145 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:47.145 fio-3.35 00:15:47.145 Starting 1 thread 00:15:49.679 00:15:49.679 test: (groupid=0, jobs=1): err= 0: pid=81404: Fri Dec 13 06:54:53 2024 00:15:49.679 read: IOPS=9414, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2006msec) 00:15:49.679 slat (nsec): min=1927, max=327822, avg=2520.05, stdev=3489.03 00:15:49.679 clat (usec): min=2632, max=12270, avg=7068.24, stdev=511.45 00:15:49.679 lat (usec): min=2668, max=12272, avg=7070.76, stdev=511.28 00:15:49.679 clat percentiles (usec): 00:15:49.679 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:49.679 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:15:49.679 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:15:49.679 | 99.00th=[ 8225], 99.50th=[ 8356], 99.90th=[10290], 99.95th=[11076], 00:15:49.679 | 99.99th=[12256] 00:15:49.679 bw ( KiB/s): min=36728, max=38216, per=99.92%, avg=37628.00, stdev=649.71, samples=4 00:15:49.679 iops : min= 9182, max= 9554, avg=9407.00, stdev=162.43, samples=4 00:15:49.679 write: IOPS=9412, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2006msec); 0 zone resets 00:15:49.679 slat (nsec): min=1969, max=278445, avg=2600.04, stdev=2623.26 00:15:49.679 clat (usec): min=2493, max=12072, avg=6463.95, stdev=465.48 00:15:49.679 lat (usec): min=2507, max=12074, avg=6466.55, stdev=465.43 00:15:49.679 clat percentiles (usec): 00:15:49.679 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:15:49.679 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:15:49.679 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:15:49.679 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[ 9634], 99.95th=[10552], 00:15:49.679 | 99.99th=[11994] 00:15:49.679 bw ( KiB/s): min=37312, max=37888, per=100.00%, avg=37650.00, stdev=262.45, samples=4 00:15:49.679 iops : min= 9328, max= 9472, avg=9412.50, stdev=65.61, samples=4 00:15:49.679 lat (msec) : 4=0.08%, 10=99.81%, 20=0.12% 00:15:49.679 cpu : usr=69.53%, sys=22.79%, ctx=7, majf=0, minf=5 00:15:49.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:49.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.679 issued rwts: total=18885,18881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.679 00:15:49.679 Run status group 0 (all jobs): 00:15:49.679 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), 
run=2006-2006msec 00:15:49.679 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.3MB), run=2006-2006msec 00:15:49.679 06:54:53 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.679 06:54:53 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.679 06:54:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:49.679 06:54:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.679 06:54:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:49.679 06:54:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.679 06:54:53 -- common/autotest_common.sh@1330 -- # shift 00:15:49.679 06:54:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:49.679 06:54:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.679 06:54:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.679 06:54:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:49.679 06:54:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:49.679 06:54:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:49.679 06:54:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:49.679 06:54:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.680 06:54:53 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:49.680 06:54:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.680 06:54:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:49.680 06:54:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:49.680 06:54:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:49.680 06:54:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.680 06:54:53 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:49.680 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:49.680 fio-3.35 00:15:49.680 Starting 1 thread 00:15:52.310 00:15:52.310 test: (groupid=0, jobs=1): err= 0: pid=81453: Fri Dec 13 06:54:56 2024 00:15:52.310 read: IOPS=8499, BW=133MiB/s (139MB/s)(266MiB/2003msec) 00:15:52.310 slat (usec): min=3, max=122, avg= 3.91, stdev= 2.34 00:15:52.310 clat (usec): min=2285, max=16891, avg=8232.32, stdev=2529.61 00:15:52.310 lat (usec): min=2289, max=16895, avg=8236.23, stdev=2529.73 00:15:52.310 clat percentiles (usec): 00:15:52.310 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5932], 00:15:52.310 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:15:52.310 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[11600], 95.00th=[13042], 00:15:52.310 | 99.00th=[14746], 99.50th=[15664], 99.90th=[16319], 99.95th=[16450], 00:15:52.310 | 99.99th=[16909] 00:15:52.310 bw ( KiB/s): min=62464, max=71840, per=49.38%, avg=67157.33, stdev=4688.01, samples=3 00:15:52.310 iops : 
min= 3904, max= 4490, avg=4197.33, stdev=293.00, samples=3 00:15:52.310 write: IOPS=4849, BW=75.8MiB/s (79.4MB/s)(141MiB/1862msec); 0 zone resets 00:15:52.310 slat (usec): min=32, max=328, avg=39.69, stdev= 8.84 00:15:52.310 clat (usec): min=6221, max=18823, avg=11974.14, stdev=2015.98 00:15:52.310 lat (usec): min=6257, max=18870, avg=12013.82, stdev=2017.52 00:15:52.310 clat percentiles (usec): 00:15:52.310 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:15:52.310 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:15:52.310 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14746], 95.00th=[15664], 00:15:52.310 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:15:52.310 | 99.99th=[18744] 00:15:52.310 bw ( KiB/s): min=65024, max=74358, per=89.69%, avg=69586.00, stdev=4670.54, samples=3 00:15:52.310 iops : min= 4064, max= 4647, avg=4349.00, stdev=291.72, samples=3 00:15:52.310 lat (msec) : 4=0.68%, 10=55.08%, 20=44.24% 00:15:52.310 cpu : usr=81.32%, sys=13.54%, ctx=3, majf=0, minf=1 00:15:52.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:52.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:52.310 issued rwts: total=17024,9029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:52.310 00:15:52.310 Run status group 0 (all jobs): 00:15:52.310 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2003-2003msec 00:15:52.310 WRITE: bw=75.8MiB/s (79.4MB/s), 75.8MiB/s-75.8MiB/s (79.4MB/s-79.4MB/s), io=141MiB (148MB), run=1862-1862msec 00:15:52.310 06:54:56 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.310 06:54:56 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:52.310 06:54:56 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:52.310 06:54:56 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:52.310 06:54:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:52.310 06:54:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:52.310 06:54:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:52.310 06:54:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:52.310 06:54:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:52.310 06:54:56 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:52.310 06:54:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:52.310 06:54:56 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:52.569 Nvme0n1 00:15:52.569 06:54:56 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:52.827 06:54:57 -- host/fio.sh@53 -- # ls_guid=b0f271ab-31d8-4800-a2fb-6ca9d58530db 00:15:52.827 06:54:57 -- host/fio.sh@54 -- # get_lvs_free_mb b0f271ab-31d8-4800-a2fb-6ca9d58530db 00:15:52.827 06:54:57 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b0f271ab-31d8-4800-a2fb-6ca9d58530db 00:15:52.827 06:54:57 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:52.827 06:54:57 -- common/autotest_common.sh@1355 -- # local fc 00:15:52.827 06:54:57 -- 
common/autotest_common.sh@1356 -- # local cs 00:15:52.827 06:54:57 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:53.084 06:54:57 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:53.084 { 00:15:53.085 "uuid": "b0f271ab-31d8-4800-a2fb-6ca9d58530db", 00:15:53.085 "name": "lvs_0", 00:15:53.085 "base_bdev": "Nvme0n1", 00:15:53.085 "total_data_clusters": 4, 00:15:53.085 "free_clusters": 4, 00:15:53.085 "block_size": 4096, 00:15:53.085 "cluster_size": 1073741824 00:15:53.085 } 00:15:53.085 ]' 00:15:53.085 06:54:57 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b0f271ab-31d8-4800-a2fb-6ca9d58530db") .free_clusters' 00:15:53.085 06:54:57 -- common/autotest_common.sh@1358 -- # fc=4 00:15:53.085 06:54:57 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b0f271ab-31d8-4800-a2fb-6ca9d58530db") .cluster_size' 00:15:53.343 4096 00:15:53.343 06:54:57 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:53.343 06:54:57 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:53.343 06:54:57 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:53.343 06:54:57 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:53.602 3f7fd520-5fc6-48bf-9843-765e6fffe5d4 00:15:53.602 06:54:57 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:53.861 06:54:58 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:54.120 06:54:58 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:54.120 06:54:58 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:54.120 06:54:58 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:54.120 06:54:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:54.120 06:54:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.120 06:54:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:54.120 06:54:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:54.120 06:54:58 -- common/autotest_common.sh@1330 -- # shift 00:15:54.120 06:54:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:54.120 06:54:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.120 06:54:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:54.120 06:54:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:54.120 06:54:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:54.378 06:54:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:54.378 06:54:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:54.378 06:54:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.378 06:54:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:54.378 06:54:58 -- 
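get_lvs_free_mb is doing the sizing arithmetic here: it pulls free_clusters and cluster_size for the store out of bdev_lvol_get_lvstores with the two jq filters above, multiplies them, and echoes the result in MiB (the bare "4096" below). In sketch form, with the values from this run:

    fc=4                        # free_clusters reported for lvs_0
    cs=1073741824               # cluster_size in bytes (1 GiB)
    free_mb=$(( fc * cs / 1024 / 1024 ))
    echo "$free_mb"             # 4096, the size handed to bdev_lvol_create below

The same arithmetic reappears for the nested store later in this test: 1022 free clusters of 4 MiB give the 4088 used for lbd_nest_0.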
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:54.378 06:54:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:54.378 06:54:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:54.378 06:54:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:54.378 06:54:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:54.378 06:54:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:54.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:54.378 fio-3.35 00:15:54.378 Starting 1 thread 00:15:56.910 00:15:56.910 test: (groupid=0, jobs=1): err= 0: pid=81557: Fri Dec 13 06:55:01 2024 00:15:56.910 read: IOPS=6507, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2009msec) 00:15:56.910 slat (usec): min=2, max=305, avg= 2.75, stdev= 3.56 00:15:56.910 clat (usec): min=2926, max=18093, avg=10253.20, stdev=850.44 00:15:56.910 lat (usec): min=2935, max=18096, avg=10255.95, stdev=850.17 00:15:56.910 clat percentiles (usec): 00:15:56.910 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:15:56.910 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:15:56.910 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:15:56.910 | 99.00th=[12125], 99.50th=[12518], 99.90th=[16057], 99.95th=[16909], 00:15:56.910 | 99.99th=[17171] 00:15:56.910 bw ( KiB/s): min=24896, max=26768, per=99.96%, avg=26018.00, stdev=809.82, samples=4 00:15:56.910 iops : min= 6224, max= 6692, avg=6504.50, stdev=202.45, samples=4 00:15:56.910 write: IOPS=6517, BW=25.5MiB/s (26.7MB/s)(51.1MiB/2009msec); 0 zone resets 00:15:56.910 slat (usec): min=2, max=207, avg= 2.86, stdev= 2.44 00:15:56.910 clat (usec): min=2267, max=18132, avg=9305.84, stdev=801.38 00:15:56.910 lat (usec): min=2279, max=18134, avg=9308.70, stdev=801.26 00:15:56.910 clat percentiles (usec): 00:15:56.910 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:15:56.910 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:15:56.910 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:15:56.910 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15795], 99.95th=[16188], 00:15:56.910 | 99.99th=[17957] 00:15:56.910 bw ( KiB/s): min=25728, max=26304, per=99.97%, avg=26062.00, stdev=241.27, samples=4 00:15:56.910 iops : min= 6432, max= 6576, avg=6515.50, stdev=60.32, samples=4 00:15:56.910 lat (msec) : 4=0.06%, 10=60.66%, 20=39.27% 00:15:56.910 cpu : usr=72.71%, sys=21.36%, ctx=10, majf=0, minf=14 00:15:56.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:56.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.910 issued rwts: total=13073,13094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.910 00:15:56.910 Run status group 0 (all jobs): 00:15:56.910 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.5MB), run=2009-2009msec 00:15:56.910 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2009-2009msec 00:15:56.910 06:55:01 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:56.910 06:55:01 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:57.169 06:55:01 -- host/fio.sh@64 -- # ls_nested_guid=74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d 00:15:57.169 06:55:01 -- host/fio.sh@65 -- # get_lvs_free_mb 74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d 00:15:57.169 06:55:01 -- common/autotest_common.sh@1353 -- # local lvs_uuid=74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d 00:15:57.169 06:55:01 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:57.169 06:55:01 -- common/autotest_common.sh@1355 -- # local fc 00:15:57.169 06:55:01 -- common/autotest_common.sh@1356 -- # local cs 00:15:57.169 06:55:01 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:57.428 06:55:01 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:57.428 { 00:15:57.428 "uuid": "b0f271ab-31d8-4800-a2fb-6ca9d58530db", 00:15:57.428 "name": "lvs_0", 00:15:57.428 "base_bdev": "Nvme0n1", 00:15:57.428 "total_data_clusters": 4, 00:15:57.428 "free_clusters": 0, 00:15:57.428 "block_size": 4096, 00:15:57.428 "cluster_size": 1073741824 00:15:57.428 }, 00:15:57.428 { 00:15:57.428 "uuid": "74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d", 00:15:57.428 "name": "lvs_n_0", 00:15:57.428 "base_bdev": "3f7fd520-5fc6-48bf-9843-765e6fffe5d4", 00:15:57.428 "total_data_clusters": 1022, 00:15:57.428 "free_clusters": 1022, 00:15:57.428 "block_size": 4096, 00:15:57.428 "cluster_size": 4194304 00:15:57.428 } 00:15:57.428 ]' 00:15:57.428 06:55:01 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d") .free_clusters' 00:15:57.428 06:55:01 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:57.428 06:55:01 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="74b4ce38-ba33-4c87-a6a6-bc44ddc7e96d") .cluster_size' 00:15:57.687 06:55:01 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:57.687 4088 00:15:57.687 06:55:01 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:57.687 06:55:01 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:57.687 06:55:01 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:57.945 a04d7308-1231-4429-a684-9d7464eb4452 00:15:57.945 06:55:02 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:58.204 06:55:02 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:58.463 06:55:02 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:58.721 06:55:03 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:58.721 06:55:03 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:58.721 06:55:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:58.721 06:55:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.721 06:55:03 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:15:58.721 06:55:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:58.721 06:55:03 -- common/autotest_common.sh@1330 -- # shift 00:15:58.721 06:55:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:58.722 06:55:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:58.722 06:55:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:58.722 06:55:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:58.722 06:55:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:58.722 06:55:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:58.722 06:55:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:58.722 06:55:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:58.722 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:58.722 fio-3.35 00:15:58.722 Starting 1 thread 00:16:01.256 00:16:01.256 test: (groupid=0, jobs=1): err= 0: pid=81641: Fri Dec 13 06:55:05 2024 00:16:01.256 read: IOPS=5777, BW=22.6MiB/s (23.7MB/s)(45.3MiB/2009msec) 00:16:01.256 slat (usec): min=2, max=241, avg= 2.87, stdev= 3.21 00:16:01.256 clat (usec): min=3017, max=21087, avg=11587.24, stdev=1000.09 00:16:01.256 lat (usec): min=3024, max=21090, avg=11590.11, stdev=999.88 00:16:01.256 clat percentiles (usec): 00:16:01.256 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:16:01.256 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:16:01.256 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:16:01.256 | 99.00th=[13698], 99.50th=[14222], 99.90th=[18482], 99.95th=[20055], 00:16:01.256 | 99.99th=[21103] 00:16:01.256 bw ( KiB/s): min=22168, max=23488, per=99.81%, avg=23066.00, stdev=621.75, samples=4 00:16:01.256 iops : min= 5542, max= 5872, avg=5766.50, stdev=155.44, samples=4 00:16:01.256 write: IOPS=5761, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:16:01.256 slat (usec): min=2, max=177, avg= 2.95, stdev= 2.36 00:16:01.256 clat (usec): min=1937, max=18776, avg=10497.86, stdev=923.48 00:16:01.256 lat (usec): min=1948, max=18779, avg=10500.82, stdev=923.39 00:16:01.256 clat percentiles (usec): 00:16:01.256 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:16:01.256 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:16:01.256 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:16:01.256 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16712], 99.95th=[17695], 00:16:01.256 | 99.99th=[18744] 00:16:01.256 bw ( KiB/s): min=22896, max=23168, per=99.99%, avg=23042.00, 
stdev=112.07, samples=4 00:16:01.256 iops : min= 5724, max= 5792, avg=5760.50, stdev=28.02, samples=4 00:16:01.256 lat (msec) : 2=0.01%, 4=0.06%, 10=15.40%, 20=84.52%, 50=0.03% 00:16:01.256 cpu : usr=74.00%, sys=19.82%, ctx=7, majf=0, minf=14 00:16:01.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:01.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.256 issued rwts: total=11607,11574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.256 00:16:01.256 Run status group 0 (all jobs): 00:16:01.256 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:16:01.256 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:16:01.256 06:55:05 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:01.256 06:55:05 -- host/fio.sh@74 -- # sync 00:16:01.519 06:55:05 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:01.778 06:55:06 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:02.037 06:55:06 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:02.037 06:55:06 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:02.296 06:55:06 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:03.233 06:55:07 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:03.233 06:55:07 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:03.233 06:55:07 -- host/fio.sh@86 -- # nvmftestfini 00:16:03.233 06:55:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:03.233 06:55:07 -- nvmf/common.sh@116 -- # sync 00:16:03.233 06:55:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:03.233 06:55:07 -- nvmf/common.sh@119 -- # set +e 00:16:03.233 06:55:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:03.233 06:55:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:03.233 rmmod nvme_tcp 00:16:03.233 rmmod nvme_fabrics 00:16:03.233 rmmod nvme_keyring 00:16:03.233 06:55:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:03.233 06:55:07 -- nvmf/common.sh@123 -- # set -e 00:16:03.233 06:55:07 -- nvmf/common.sh@124 -- # return 0 00:16:03.233 06:55:07 -- nvmf/common.sh@477 -- # '[' -n 81321 ']' 00:16:03.233 06:55:07 -- nvmf/common.sh@478 -- # killprocess 81321 00:16:03.233 06:55:07 -- common/autotest_common.sh@936 -- # '[' -z 81321 ']' 00:16:03.233 06:55:07 -- common/autotest_common.sh@940 -- # kill -0 81321 00:16:03.233 06:55:07 -- common/autotest_common.sh@941 -- # uname 00:16:03.233 06:55:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:03.233 06:55:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81321 00:16:03.233 killing process with pid 81321 00:16:03.233 06:55:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:03.233 06:55:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:03.233 06:55:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81321' 00:16:03.233 06:55:07 -- common/autotest_common.sh@955 -- # kill 81321 00:16:03.233 
06:55:07 -- common/autotest_common.sh@960 -- # wait 81321 00:16:03.492 06:55:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:03.492 06:55:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:03.492 06:55:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:03.492 06:55:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.492 06:55:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:03.492 06:55:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.492 06:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.492 06:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.492 06:55:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:03.492 00:16:03.492 real 0m19.824s 00:16:03.492 user 1m27.719s 00:16:03.492 sys 0m4.236s 00:16:03.492 06:55:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:03.492 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.492 ************************************ 00:16:03.492 END TEST nvmf_fio_host 00:16:03.492 ************************************ 00:16:03.492 06:55:07 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:03.492 06:55:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:03.492 06:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.492 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.492 ************************************ 00:16:03.492 START TEST nvmf_failover 00:16:03.492 ************************************ 00:16:03.492 06:55:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:03.751 * Looking for test storage... 00:16:03.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:03.751 06:55:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:03.751 06:55:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:03.752 06:55:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:03.752 06:55:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:03.752 06:55:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:03.752 06:55:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:03.752 06:55:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:03.752 06:55:08 -- scripts/common.sh@335 -- # IFS=.-: 00:16:03.752 06:55:08 -- scripts/common.sh@335 -- # read -ra ver1 00:16:03.752 06:55:08 -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.752 06:55:08 -- scripts/common.sh@336 -- # read -ra ver2 00:16:03.752 06:55:08 -- scripts/common.sh@337 -- # local 'op=<' 00:16:03.752 06:55:08 -- scripts/common.sh@339 -- # ver1_l=2 00:16:03.752 06:55:08 -- scripts/common.sh@340 -- # ver2_l=1 00:16:03.752 06:55:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:03.752 06:55:08 -- scripts/common.sh@343 -- # case "$op" in 00:16:03.752 06:55:08 -- scripts/common.sh@344 -- # : 1 00:16:03.752 06:55:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:03.752 06:55:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.752 06:55:08 -- scripts/common.sh@364 -- # decimal 1 00:16:03.752 06:55:08 -- scripts/common.sh@352 -- # local d=1 00:16:03.752 06:55:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.752 06:55:08 -- scripts/common.sh@354 -- # echo 1 00:16:03.752 06:55:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:03.752 06:55:08 -- scripts/common.sh@365 -- # decimal 2 00:16:03.752 06:55:08 -- scripts/common.sh@352 -- # local d=2 00:16:03.752 06:55:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.752 06:55:08 -- scripts/common.sh@354 -- # echo 2 00:16:03.752 06:55:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:03.752 06:55:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:03.752 06:55:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:03.752 06:55:08 -- scripts/common.sh@367 -- # return 0 00:16:03.752 06:55:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.752 06:55:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:03.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.752 --rc genhtml_branch_coverage=1 00:16:03.752 --rc genhtml_function_coverage=1 00:16:03.752 --rc genhtml_legend=1 00:16:03.752 --rc geninfo_all_blocks=1 00:16:03.752 --rc geninfo_unexecuted_blocks=1 00:16:03.752 00:16:03.752 ' 00:16:03.752 06:55:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:03.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.752 --rc genhtml_branch_coverage=1 00:16:03.752 --rc genhtml_function_coverage=1 00:16:03.752 --rc genhtml_legend=1 00:16:03.752 --rc geninfo_all_blocks=1 00:16:03.752 --rc geninfo_unexecuted_blocks=1 00:16:03.752 00:16:03.752 ' 00:16:03.752 06:55:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:03.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.752 --rc genhtml_branch_coverage=1 00:16:03.752 --rc genhtml_function_coverage=1 00:16:03.752 --rc genhtml_legend=1 00:16:03.752 --rc geninfo_all_blocks=1 00:16:03.752 --rc geninfo_unexecuted_blocks=1 00:16:03.752 00:16:03.752 ' 00:16:03.752 06:55:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:03.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.752 --rc genhtml_branch_coverage=1 00:16:03.752 --rc genhtml_function_coverage=1 00:16:03.752 --rc genhtml_legend=1 00:16:03.752 --rc geninfo_all_blocks=1 00:16:03.752 --rc geninfo_unexecuted_blocks=1 00:16:03.752 00:16:03.752 ' 00:16:03.752 06:55:08 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.752 06:55:08 -- nvmf/common.sh@7 -- # uname -s 00:16:03.752 06:55:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.752 06:55:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.752 06:55:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.752 06:55:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.752 06:55:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.752 06:55:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.752 06:55:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.752 06:55:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.752 06:55:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.752 06:55:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:03.752 
06:55:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:03.752 06:55:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.752 06:55:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.752 06:55:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.752 06:55:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.752 06:55:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.752 06:55:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.752 06:55:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.752 06:55:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.752 06:55:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.752 06:55:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.752 06:55:08 -- paths/export.sh@5 -- # export PATH 00:16:03.752 06:55:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.752 06:55:08 -- nvmf/common.sh@46 -- # : 0 00:16:03.752 06:55:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:03.752 06:55:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:03.752 06:55:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:03.752 06:55:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.752 06:55:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.752 06:55:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
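This is build_nvmf_app_args assembling the target's command line for the failover run: with no ISO or sudo options set it only appends the shared-memory id and the all-groups tracepoint mask, and once nvmf_veth_init finishes, the nvmf/common.sh@208 step seen earlier prepends the namespace wrapper. Roughly (the bare nvmf_tgt starting value is an assumption; it is not shown in this trace):

    NVMF_APP=(./build/bin/nvmf_tgt)              # assumed starting point
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # nvmf/common.sh@28
    NVMF_APP+=("${NO_HUGE[@]}")                  # empty here, nvmf/common.sh@30
    # after veth init (nvmf/common.sh@208): run the target inside the test namespace
    NVMF_APP=(ip netns exec "$NVMF_TARGET_NAMESPACE" "${NVMF_APP[@]}")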
00:16:03.752 06:55:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:03.752 06:55:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:03.752 06:55:08 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.752 06:55:08 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.752 06:55:08 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.752 06:55:08 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:03.752 06:55:08 -- host/failover.sh@18 -- # nvmftestinit 00:16:03.752 06:55:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:03.752 06:55:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.752 06:55:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:03.752 06:55:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:03.752 06:55:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:03.752 06:55:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.752 06:55:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.752 06:55:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.752 06:55:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:03.752 06:55:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:03.752 06:55:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.752 06:55:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.752 06:55:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:03.752 06:55:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:03.752 06:55:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.752 06:55:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.752 06:55:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.752 06:55:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.752 06:55:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.752 06:55:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.752 06:55:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.752 06:55:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.752 06:55:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:03.752 06:55:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:03.752 Cannot find device "nvmf_tgt_br" 00:16:03.752 06:55:08 -- nvmf/common.sh@154 -- # true 00:16:03.752 06:55:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.752 Cannot find device "nvmf_tgt_br2" 00:16:03.752 06:55:08 -- nvmf/common.sh@155 -- # true 00:16:03.752 06:55:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:03.752 06:55:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:03.752 Cannot find device "nvmf_tgt_br" 00:16:03.753 06:55:08 -- nvmf/common.sh@157 -- # true 00:16:03.753 06:55:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:03.753 Cannot find device "nvmf_tgt_br2" 00:16:03.753 06:55:08 -- nvmf/common.sh@158 -- # true 00:16:03.753 06:55:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:04.012 06:55:08 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:16:04.012 06:55:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.012 06:55:08 -- nvmf/common.sh@161 -- # true 00:16:04.012 06:55:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.012 06:55:08 -- nvmf/common.sh@162 -- # true 00:16:04.012 06:55:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.012 06:55:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.012 06:55:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.012 06:55:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.012 06:55:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.012 06:55:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.012 06:55:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.012 06:55:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.012 06:55:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.012 06:55:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:04.012 06:55:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:04.012 06:55:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:04.012 06:55:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:04.012 06:55:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.012 06:55:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.012 06:55:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.012 06:55:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:04.012 06:55:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:04.012 06:55:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.012 06:55:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.012 06:55:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.012 06:55:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.012 06:55:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.012 06:55:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:04.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:04.012 00:16:04.012 --- 10.0.0.2 ping statistics --- 00:16:04.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.012 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:04.012 06:55:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:04.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:04.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:16:04.012 00:16:04.012 --- 10.0.0.3 ping statistics --- 00:16:04.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.012 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:04.012 06:55:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:04.012 00:16:04.012 --- 10.0.0.1 ping statistics --- 00:16:04.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.012 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:04.012 06:55:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.012 06:55:08 -- nvmf/common.sh@421 -- # return 0 00:16:04.012 06:55:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:04.012 06:55:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.012 06:55:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:04.012 06:55:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:04.012 06:55:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.012 06:55:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:04.012 06:55:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:04.012 06:55:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:04.012 06:55:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:04.012 06:55:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.012 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:16:04.272 06:55:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:04.272 06:55:08 -- nvmf/common.sh@469 -- # nvmfpid=81884 00:16:04.272 06:55:08 -- nvmf/common.sh@470 -- # waitforlisten 81884 00:16:04.272 06:55:08 -- common/autotest_common.sh@829 -- # '[' -z 81884 ']' 00:16:04.272 06:55:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.272 06:55:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.272 06:55:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.272 06:55:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.272 06:55:08 -- common/autotest_common.sh@10 -- # set +x 00:16:04.272 [2024-12-13 06:55:08.598107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:04.272 [2024-12-13 06:55:08.598187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.272 [2024-12-13 06:55:08.731239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.272 [2024-12-13 06:55:08.764726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:04.272 [2024-12-13 06:55:08.764899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.272 [2024-12-13 06:55:08.764912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:04.272 [2024-12-13 06:55:08.764921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.272 [2024-12-13 06:55:08.765582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.272 [2024-12-13 06:55:08.765654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.272 [2024-12-13 06:55:08.765661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.209 06:55:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.209 06:55:09 -- common/autotest_common.sh@862 -- # return 0 00:16:05.209 06:55:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:05.209 06:55:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.209 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.209 06:55:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.209 06:55:09 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:05.468 [2024-12-13 06:55:09.848721] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.468 06:55:09 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:05.727 Malloc0 00:16:05.727 06:55:10 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:05.985 06:55:10 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:06.244 06:55:10 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.502 [2024-12-13 06:55:10.960472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.502 06:55:10 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:06.761 [2024-12-13 06:55:11.192685] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:06.761 06:55:11 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:07.019 [2024-12-13 06:55:11.420940] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:07.019 06:55:11 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:07.019 06:55:11 -- host/failover.sh@31 -- # bdevperf_pid=81944 00:16:07.019 06:55:11 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.019 06:55:11 -- host/failover.sh@34 -- # waitforlisten 81944 /var/tmp/bdevperf.sock 00:16:07.019 06:55:11 -- common/autotest_common.sh@829 -- # '[' -z 81944 ']' 00:16:07.019 06:55:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.019 06:55:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:07.019 06:55:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.019 06:55:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.019 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:16:07.955 06:55:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.955 06:55:12 -- common/autotest_common.sh@862 -- # return 0 00:16:07.955 06:55:12 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:08.214 NVMe0n1 00:16:08.473 06:55:12 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:08.732 00:16:08.732 06:55:13 -- host/failover.sh@39 -- # run_test_pid=81973 00:16:08.732 06:55:13 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:08.732 06:55:13 -- host/failover.sh@41 -- # sleep 1 00:16:09.668 06:55:14 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.927 [2024-12-13 06:55:14.365822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145d240 is same with the state(5) to be set 00:16:09.927 [the same tcp.c:1576 message repeats for tqpair=0x145d240, timestamps 06:55:14.365893 through 06:55:14.366011]
00:16:09.927 06:55:14 -- host/failover.sh@45 -- # sleep 3 00:16:13.218 06:55:17 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:13.218 00 00:16:13.786 06:55:17 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:13.786 [2024-12-13 06:55:17.997711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145de50 is same with the state(5) to be set 00:16:13.786 [the same tcp.c:1576 message repeats for tqpair=0x145de50, timestamps 06:55:17.997815 through 06:55:17.997963]
00:16:13.786 06:55:18 -- host/failover.sh@50 -- # sleep 3 00:16:17.073 06:55:21 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.073 [2024-12-13 06:55:21.286868] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.073 06:55:21 -- host/failover.sh@55 -- # sleep 1
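The trace above is the heart of the failover exercise: while bdevperf drives a verify workload against NVMe0, which is attached over both port 4420 and port 4421, the script removes the listener under the active path, waits for I/O to resume on the surviving one, and then restores it. A minimal sketch of that choreography, reduced to the rpc.py invocations already traced in this log (same paths, address, and NQN; the final remove of 4422 is the step traced just below):

    # Failover choreography from host/failover.sh, rpc.py calls only.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Drop the listener bdevperf started on; in-flight I/O on that qpair is
    # completed back as ABORTED - SQ DELETION and the bdev retries on 4421.
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # Register a third portal with the initiator, then drop the second path.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # Restore the first listener, then drop the third, failing back to 4420.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422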
00:16:18.009 06:55:22 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:18.268 [2024-12-13 06:55:22.565294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601550 is same with the state(5) to be set 00:16:18.268 [the same tcp.c:1576 message repeats for tqpair=0x1601550, timestamps 06:55:22.565393 through 06:55:22.565679]
00:16:18.269 06:55:22 -- host/failover.sh@59 -- # wait 81973 00:16:24.848 0 00:16:24.848 06:55:28 -- host/failover.sh@61 -- # killprocess 81944 00:16:24.848 06:55:28 -- common/autotest_common.sh@936 -- # '[' -z 81944 ']' 00:16:24.848 06:55:28 -- common/autotest_common.sh@940 -- # kill -0 81944 00:16:24.848 06:55:28 -- common/autotest_common.sh@941 -- # uname 00:16:24.848 06:55:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.848 06:55:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81944 00:16:24.848 killing process with pid 81944 00:16:24.848 06:55:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:24.848 06:55:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:24.848 06:55:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81944' 00:16:24.848 06:55:28 -- common/autotest_common.sh@955 -- # kill 81944 00:16:24.848 06:55:28 -- common/autotest_common.sh@960 -- # wait 81944 00:16:24.848 06:55:28 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:24.848 [2024-12-13 06:55:11.475096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:24.848 [2024-12-13 06:55:11.475200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81944 ] 00:16:24.848 [2024-12-13 06:55:11.602821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.848 [2024-12-13 06:55:11.637643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.848 Running I/O for 15 seconds...
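What follows in try.txt is the bdevperf side of the same event: each I/O that was in flight when the 4420 listener disappeared is printed as a command/completion pair, completed with ABORTED - SQ DELETION (00/08) while the initiator carries on over the surviving portal. For reference, the host half of the run was wired up as below; this is a sketch assembled from the commands already traced above, not a separate script in the repository:

    # bdevperf in -z (wait-for-RPC-start) mode on its own socket; -q 128 queue
    # depth, -o 4096-byte I/Os, -w verify workload, -t 15 second run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # The first attach creates bdev NVMe0n1; the second registers the same
    # controller over a second portal (note it prints no new bdev name above),
    # giving bdev_nvme an alternate path to fail over to.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Start the actual I/O once both paths are registered.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &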
00:16:24.848 [2024-12-13 06:55:14.366067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.848 [2024-12-13 06:55:14.366119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.848 [the nvme_qpair.c:243 command / nvme_qpair.c:474 completion pair above repeats for each I/O outstanding on sqid:1, READ and WRITE commands of len:8 covering lba 121184 through 122456, every one completed as ABORTED - SQ DELETION (00/08), timestamps 06:55:14.366146 through 06:55:14.369786]
sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.369822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.851 [2024-12-13 06:55:14.369853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.369882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.369912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.369941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.851 [2024-12-13 06:55:14.369970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.851 [2024-12-13 06:55:14.370000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.851 [2024-12-13 06:55:14.370029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.851 [2024-12-13 06:55:14.370061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 
[2024-12-13 06:55:14.370106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-13 06:55:14.370273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17007e0 is same with the state(5) to be set 00:16:24.851 [2024-12-13 06:55:14.370305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:24.851 [2024-12-13 06:55:14.370318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:24.851 [2024-12-13 06:55:14.370329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121944 len:8 PRP1 0x0 PRP2 0x0 00:16:24.851 [2024-12-13 06:55:14.370343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.851 [2024-12-13 06:55:14.370400] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17007e0 was disconnected and freed. reset controller. 
00:16:24.851 [2024-12-13 06:55:14.370422] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:24.851 [2024-12-13 06:55:14.370476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:24.851 [2024-12-13 06:55:14.370497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.851 [2024-12-13 06:55:14.370513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:24.851 [2024-12-13 06:55:14.370526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.851 [2024-12-13 06:55:14.370540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:24.851 [2024-12-13 06:55:14.370554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.851 [2024-12-13 06:55:14.370567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:24.851 [2024-12-13 06:55:14.370581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.851 [2024-12-13 06:55:14.370595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:24.851 [2024-12-13 06:55:14.370642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703820 (9): Bad file descriptor
00:16:24.851 [2024-12-13 06:55:14.373177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:24.851 [2024-12-13 06:55:14.402806] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:24.851 [2024-12-13 06:55:17.998025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:24.851 [2024-12-13 06:55:17.998080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: READ and WRITE sqid:1 nsid:1 len:8 commands (lba 124880-126240) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:16:24.855 [2024-12-13 06:55:18.001949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701440 is same with the state(5) to be set
00:16:24.855 [2024-12-13 06:55:18.001966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:24.855 [2024-12-13 06:55:18.001976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:24.855 [2024-12-13 06:55:18.001986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125664 len:8 PRP1 0x0 PRP2 0x0
00:16:24.855 [2024-12-13 06:55:18.001998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.855 [2024-12-13 06:55:18.002041] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1701440 was disconnected and freed. reset controller.
00:16:24.855 [2024-12-13 06:55:18.002059] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:16:24.855 [2024-12-13 06:55:18.002112-06:55:18.002211] nvme_qpair.c: 223/474: *NOTICE*: [condensed] 4 queued ASYNC EVENT REQUEST (0c) commands on qid:0 (cid:3,2,1,0) completed with ABORTED - SQ DELETION (00/08) status.
00:16:24.855 [2024-12-13 06:55:18.002223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:24.855 [2024-12-13 06:55:18.002269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703820 (9): Bad file descriptor
00:16:24.855 [2024-12-13 06:55:18.004884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:24.855 [2024-12-13 06:55:18.038834] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
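The block above is one complete failover cycle: the TCP qpair to 10.0.0.2:4421 is deleted, every outstanding command on it completes with ABORTED - SQ DELETION, bdev_nvme fails the trid over to 10.0.0.2:4422, and the controller reset succeeds. A minimal sketch of how such alternate paths get registered up front, assuming the subsystem and RPC socket names used elsewhere in this run; the ordering is illustrative, not the exact script:

# Sketch (assumed setup, built only from RPC calls visible in this log):
# expose the subsystem on three TCP ports, then attach the same controller
# name once per port -- repeat attaches with the same -b name register
# failover trids rather than new bdevs (only the first attach prints NVMe0n1).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s $port
done
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done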
00:16:24.855-00:16:24.859 [2024-12-13 06:55:22.565741-06:55:22.569939] nvme_qpair.c: 243/474: *NOTICE*: [condensed] well over a hundred queued READ/WRITE commands on sqid:1 (lba 99512-100864 in this excerpt) printed and completed with ABORTED - SQ DELETION (00/08) status, one command/completion pair per I/O.
00:16:24.859 [2024-12-13 06:55:22.569954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1727b00 is same with the state(5) to be set
00:16:24.859 [2024-12-13 06:55:22.569970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:24.859 [2024-12-13 06:55:22.569980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:24.859 [2024-12-13 06:55:22.570048] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1727b00 was disconnected and freed. reset controller.
00:16:24.859 [2024-12-13 06:55:22.570089] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:24.859 [2024-12-13 06:55:22.570144-06:55:22.570260] nvme_qpair.c: 223/474: *NOTICE*: [condensed] 4 queued ASYNC EVENT REQUEST (0c) commands on qid:0 (cid:0,1,2,3) completed with ABORTED - SQ DELETION (00/08) status.
00:16:24.859 [2024-12-13 06:55:22.570274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:24.859 [2024-12-13 06:55:22.570322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703820 (9): Bad file descriptor
00:16:24.859 [2024-12-13 06:55:22.572789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:24.859 [2024-12-13 06:55:22.599892] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:24.859 Latency(us)
[2024-12-13T06:55:29.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-13T06:55:29.378Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:24.859 Verification LBA range: start 0x0 length 0x4000
00:16:24.859 NVMe0n1 : 15.01 13323.77 52.05 306.33 0.00 9370.32 431.94 14417.92
[2024-12-13T06:55:29.378Z] ===================================================================================================================
[2024-12-13T06:55:29.378Z] Total : 13323.77 52.05 306.33 0.00 9370.32 431.94 14417.92
00:16:24.859 Received shutdown signal, test time was about 15.000000 seconds
00:16:24.859 Latency(us)
[2024-12-13T06:55:29.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-13T06:55:29.378Z] ===================================================================================================================
[2024-12-13T06:55:29.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
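The first phase ends with the assertion traced just below: the 15 s run crossed 4420 -> 4421 -> 4422 -> 4420, so the captured output must contain exactly three "Resetting controller successful" notices. A hedged sketch of that check, mirroring the grep and the count guard in the trace ($bdevperf_log is an assumed variable name, not taken from the script):

# fail the test unless all three planned failovers completed
count=$(grep -c 'Resetting controller successful' "$bdevperf_log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, saw $count" >&2
    exit 1
fi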
00:16:24.860 06:55:28 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:24.860 06:55:28 -- host/failover.sh@65 -- # count=3
00:16:24.860 06:55:28 -- host/failover.sh@67 -- # (( count != 3 ))
00:16:24.860 06:55:28 -- host/failover.sh@73 -- # bdevperf_pid=82148
00:16:24.860 06:55:28 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:16:24.860 06:55:28 -- host/failover.sh@75 -- # waitforlisten 82148 /var/tmp/bdevperf.sock
00:16:24.860 06:55:28 -- common/autotest_common.sh@829 -- # '[' -z 82148 ']'
00:16:24.860 06:55:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:24.860 06:55:28 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:24.860 06:55:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:24.860 06:55:28 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:24.860 06:55:28 -- common/autotest_common.sh@10 -- # set +x
00:16:24.860 06:55:28 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:24.860 06:55:28 -- common/autotest_common.sh@862 -- # return 0
00:16:24.860 06:55:28 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-12-13 06:55:29.037487] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
06:55:29 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-12-13 06:55:29.269721] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
06:55:29 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:25.119 NVMe0n1
00:16:25.119 06:55:29 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:25.685
00:16:25.685 06:55:29 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:25.943
00:16:25.943 06:55:30 -- host/failover.sh@82 -- # grep -q NVMe0
00:16:25.943 06:55:30 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:26.217 06:55:30 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:26.475 06:55:30 -- host/failover.sh@87 -- # sleep 3
00:16:29.761 06:55:33 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:29.761 06:55:33 -- host/failover.sh@88 -- # grep -q NVMe0
00:16:29.761 06:55:34 -- host/failover.sh@90 -- # run_test_pid=82217
00:16:29.761 06:55:34 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:29.761 06:55:34 -- host/failover.sh@92 -- # wait 82217
00:16:30.698 0
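The trace above is the setup for the second phase: a fresh bdevperf is started in wait mode (-z), listeners on 4421 and 4422 are added, the controller is attached on all three ports, and the active 4420 path is detached so the short verify run (whose captured log follows below) has to come in over a failover path. A condensed sketch of that sequence, using only commands visible in the trace; waitforlisten and error handling are elided to comments:

# Sketch of the phase-2 setup (paths and values taken from this log)
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!                                    # 82148 in this run
# ... wait for $sock to accept RPCs (waitforlisten) ...
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # prints NVMe0n1
# same attach repeated with -s 4421 and -s 4422 registers failover paths
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # drop the active path
sleep 3
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0  # controller must survive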
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:30.698 [2024-12-13 06:55:28.493488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:30.698 [2024-12-13 06:55:28.493610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82148 ] 00:16:30.698 [2024-12-13 06:55:28.624345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.698 [2024-12-13 06:55:28.658014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.698 [2024-12-13 06:55:30.736625] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:30.698 [2024-12-13 06:55:30.736756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.698 [2024-12-13 06:55:30.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.698 [2024-12-13 06:55:30.736800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.698 [2024-12-13 06:55:30.736814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.698 [2024-12-13 06:55:30.736828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.698 [2024-12-13 06:55:30.736842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.698 [2024-12-13 06:55:30.736871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.698 [2024-12-13 06:55:30.736884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.698 [2024-12-13 06:55:30.736897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:30.698 [2024-12-13 06:55:30.736963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:30.698 [2024-12-13 06:55:30.736993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762820 (9): Bad file descriptor 00:16:30.698 [2024-12-13 06:55:30.748448] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:30.698 Running I/O for 1 seconds... 
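bdevperf was relaunched above with -z, so it sits idle on /var/tmp/bdevperf.sock until a perform_tests RPC arrives; the xtrace shows the trigger script being started and then waited on by PID (82217 here). A condensed sketch of that kick-off, assuming the usual backgrounding via & and $! (the trace only shows the resulting pid and the wait):

    # Start the 1-second verify run out-of-band and block until it finishes.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"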
00:16:30.698
00:16:30.698 Latency(us)
00:16:30.698 [2024-12-13T06:55:35.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:30.698 [2024-12-13T06:55:35.217Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:30.698 Verification LBA range: start 0x0 length 0x4000
00:16:30.698 NVMe0n1 : 1.01 13280.82 51.88 0.00 0.00 9584.01 1124.54 14417.92
00:16:30.698 [2024-12-13T06:55:35.217Z] ===================================================================================================================
00:16:30.698 [2024-12-13T06:55:35.217Z] Total : 13280.82 51.88 0.00 0.00 9584.01 1124.54 14417.92
00:16:30.698 06:55:35 -- host/failover.sh@95 -- # grep -q NVMe0
00:16:30.698 06:55:35 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:30.956 06:55:35 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:31.214 06:55:35 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:31.214 06:55:35 -- host/failover.sh@99 -- # grep -q NVMe0
00:16:31.473 06:55:35 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:32.042 06:55:36 -- host/failover.sh@101 -- # sleep 3
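Each failover hop is forced from the host side: the script detaches the path the controller is currently using, bdev_nvme fails over to the next attached trid, and a bdev_nvme_get_controllers | grep -q NVMe0 confirms the controller survived the switch. One hop, condensed from the RPCs traced above:

    # Drop the active 4422 path; the controller must still exist afterwards,
    # now running on one of the remaining listeners.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | grep -q NVMe0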
00:16:35.723 rmmod nvme_fabrics 00:16:35.723 rmmod nvme_keyring 00:16:35.723 06:55:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:35.723 06:55:40 -- nvmf/common.sh@123 -- # set -e 00:16:35.723 06:55:40 -- nvmf/common.sh@124 -- # return 0 00:16:35.723 06:55:40 -- nvmf/common.sh@477 -- # '[' -n 81884 ']' 00:16:35.723 06:55:40 -- nvmf/common.sh@478 -- # killprocess 81884 00:16:35.723 06:55:40 -- common/autotest_common.sh@936 -- # '[' -z 81884 ']' 00:16:35.723 06:55:40 -- common/autotest_common.sh@940 -- # kill -0 81884 00:16:35.723 06:55:40 -- common/autotest_common.sh@941 -- # uname 00:16:35.723 06:55:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.723 06:55:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81884 00:16:35.723 killing process with pid 81884 00:16:35.723 06:55:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:35.723 06:55:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:35.723 06:55:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81884' 00:16:35.723 06:55:40 -- common/autotest_common.sh@955 -- # kill 81884 00:16:35.723 06:55:40 -- common/autotest_common.sh@960 -- # wait 81884 00:16:35.723 06:55:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.723 06:55:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.723 06:55:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.723 06:55:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.723 06:55:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.723 06:55:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.723 06:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.723 06:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.982 06:55:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:35.982 ************************************ 00:16:35.982 END TEST nvmf_failover 00:16:35.982 ************************************ 00:16:35.982 00:16:35.982 real 0m32.312s 00:16:35.982 user 2m5.462s 00:16:35.982 sys 0m5.290s 00:16:35.982 06:55:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:35.982 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.982 06:55:40 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:35.982 06:55:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:35.982 06:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.982 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.982 ************************************ 00:16:35.982 START TEST nvmf_discovery 00:16:35.982 ************************************ 00:16:35.982 06:55:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:35.982 * Looking for test storage... 
00:16:35.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.982 06:55:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:35.982 06:55:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:35.982 06:55:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:35.982 06:55:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:35.982 06:55:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:35.982 06:55:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:35.982 06:55:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:35.982 06:55:40 -- scripts/common.sh@335 -- # IFS=.-: 00:16:35.982 06:55:40 -- scripts/common.sh@335 -- # read -ra ver1 00:16:35.982 06:55:40 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.982 06:55:40 -- scripts/common.sh@336 -- # read -ra ver2 00:16:35.982 06:55:40 -- scripts/common.sh@337 -- # local 'op=<' 00:16:35.982 06:55:40 -- scripts/common.sh@339 -- # ver1_l=2 00:16:35.982 06:55:40 -- scripts/common.sh@340 -- # ver2_l=1 00:16:35.982 06:55:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:35.982 06:55:40 -- scripts/common.sh@343 -- # case "$op" in 00:16:35.982 06:55:40 -- scripts/common.sh@344 -- # : 1 00:16:35.982 06:55:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:35.982 06:55:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.982 06:55:40 -- scripts/common.sh@364 -- # decimal 1 00:16:35.982 06:55:40 -- scripts/common.sh@352 -- # local d=1 00:16:35.982 06:55:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.982 06:55:40 -- scripts/common.sh@354 -- # echo 1 00:16:35.982 06:55:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:35.982 06:55:40 -- scripts/common.sh@365 -- # decimal 2 00:16:35.982 06:55:40 -- scripts/common.sh@352 -- # local d=2 00:16:35.982 06:55:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.982 06:55:40 -- scripts/common.sh@354 -- # echo 2 00:16:35.982 06:55:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:35.982 06:55:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:35.982 06:55:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:35.982 06:55:40 -- scripts/common.sh@367 -- # return 0 00:16:35.982 06:55:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.982 06:55:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.982 --rc genhtml_branch_coverage=1 00:16:35.982 --rc genhtml_function_coverage=1 00:16:35.982 --rc genhtml_legend=1 00:16:35.982 --rc geninfo_all_blocks=1 00:16:35.982 --rc geninfo_unexecuted_blocks=1 00:16:35.982 00:16:35.982 ' 00:16:35.982 06:55:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.982 --rc genhtml_branch_coverage=1 00:16:35.982 --rc genhtml_function_coverage=1 00:16:35.982 --rc genhtml_legend=1 00:16:35.982 --rc geninfo_all_blocks=1 00:16:35.982 --rc geninfo_unexecuted_blocks=1 00:16:35.982 00:16:35.982 ' 00:16:35.982 06:55:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.982 --rc genhtml_branch_coverage=1 00:16:35.982 --rc genhtml_function_coverage=1 00:16:35.982 --rc genhtml_legend=1 00:16:35.982 --rc geninfo_all_blocks=1 00:16:35.982 --rc geninfo_unexecuted_blocks=1 00:16:35.982 00:16:35.982 ' 00:16:35.982 
06:55:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:35.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.982 --rc genhtml_branch_coverage=1 00:16:35.982 --rc genhtml_function_coverage=1 00:16:35.982 --rc genhtml_legend=1 00:16:35.982 --rc geninfo_all_blocks=1 00:16:35.982 --rc geninfo_unexecuted_blocks=1 00:16:35.982 00:16:35.982 ' 00:16:35.982 06:55:40 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.241 06:55:40 -- nvmf/common.sh@7 -- # uname -s 00:16:36.241 06:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.241 06:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.241 06:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.241 06:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.241 06:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.241 06:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.241 06:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.241 06:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.241 06:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.241 06:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:36.241 06:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:36.241 06:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.241 06:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.241 06:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.241 06:55:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.241 06:55:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.241 06:55:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.241 06:55:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.241 06:55:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.241 06:55:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.241 06:55:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.241 06:55:40 -- paths/export.sh@5 -- # export PATH 00:16:36.241 06:55:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.241 06:55:40 -- nvmf/common.sh@46 -- # : 0 00:16:36.241 06:55:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:36.241 06:55:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:36.241 06:55:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:36.241 06:55:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.241 06:55:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.241 06:55:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:36.241 06:55:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:36.241 06:55:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:36.241 06:55:40 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:36.241 06:55:40 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:36.241 06:55:40 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:36.241 06:55:40 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:36.241 06:55:40 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:36.241 06:55:40 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:36.241 06:55:40 -- host/discovery.sh@25 -- # nvmftestinit 00:16:36.241 06:55:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:36.241 06:55:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.241 06:55:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:36.241 06:55:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:36.241 06:55:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:36.241 06:55:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.241 06:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.241 06:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.241 06:55:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:36.241 06:55:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:36.241 06:55:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.241 06:55:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.241 06:55:40 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.241 06:55:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:36.241 06:55:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.241 06:55:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.241 06:55:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.241 06:55:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.241 06:55:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.241 06:55:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.241 06:55:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.241 06:55:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.241 06:55:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:36.242 06:55:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:36.242 Cannot find device "nvmf_tgt_br" 00:16:36.242 06:55:40 -- nvmf/common.sh@154 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.242 Cannot find device "nvmf_tgt_br2" 00:16:36.242 06:55:40 -- nvmf/common.sh@155 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:36.242 06:55:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:36.242 Cannot find device "nvmf_tgt_br" 00:16:36.242 06:55:40 -- nvmf/common.sh@157 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:36.242 Cannot find device "nvmf_tgt_br2" 00:16:36.242 06:55:40 -- nvmf/common.sh@158 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:36.242 06:55:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:36.242 06:55:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.242 06:55:40 -- nvmf/common.sh@161 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.242 06:55:40 -- nvmf/common.sh@162 -- # true 00:16:36.242 06:55:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.242 06:55:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.242 06:55:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.242 06:55:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.242 06:55:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.242 06:55:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.242 06:55:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.242 06:55:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.242 06:55:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.242 06:55:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:36.242 06:55:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:36.242 06:55:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:36.242 06:55:40 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:36.242 06:55:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:36.501 06:55:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:36.501 06:55:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:36.501 06:55:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:36.501 06:55:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:36.501 06:55:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:36.501 06:55:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:36.501 06:55:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:36.501 06:55:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:36.501 06:55:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:36.501 06:55:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:36.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:36.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms
00:16:36.501
00:16:36.501 --- 10.0.0.2 ping statistics ---
00:16:36.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.501 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:16:36.501 06:55:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:36.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:36.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms
00:16:36.501
00:16:36.501 --- 10.0.0.3 ping statistics ---
00:16:36.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.501 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:16:36.501 06:55:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:36.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:36.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:16:36.501
00:16:36.501 --- 10.0.0.1 ping statistics ---
00:16:36.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.501 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:16:36.501 06:55:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:36.501 06:55:40 -- nvmf/common.sh@421 -- # return 0
00:16:36.501 06:55:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:36.501 06:55:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:36.501 06:55:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:36.501 06:55:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:36.501 06:55:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:36.501 06:55:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:36.501 06:55:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:16:36.501 06:55:40 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:16:36.501 06:55:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:36.501 06:55:40 -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:36.501 06:55:40 -- common/autotest_common.sh@10 -- # set +x
00:16:36.501 06:55:40 -- nvmf/common.sh@469 -- # nvmfpid=82493
00:16:36.501 06:55:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:36.501 06:55:40 -- nvmf/common.sh@470 -- # waitforlisten 82493
00:16:36.501 06:55:40 -- common/autotest_common.sh@829 -- # '[' -z 82493 ']'
00:16:36.501 06:55:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:36.501 06:55:40 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:36.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:36.501 06:55:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:36.501 06:55:40 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:36.501 06:55:40 -- common/autotest_common.sh@10 -- # set +x
00:16:36.501 [2024-12-13 06:55:40.924487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:36.501 [2024-12-13 06:55:40.924587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:36.760 [2024-12-13 06:55:41.065302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:36.760 [2024-12-13 06:55:41.097599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:36.760 [2024-12-13 06:55:41.097748] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:36.760 [2024-12-13 06:55:41.097761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:36.760 [2024-12-13 06:55:41.097769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
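The three pings above verify the topology nvmf/common.sh just assembled: the initiator address (10.0.0.1 on nvmf_init_if) and the target addresses (10.0.0.2 and 10.0.0.3, on veth ends moved into the nvmf_tgt_ns_spdk namespace) all reach each other through the nvmf_br bridge, which is why the nvmf_tgt started next can listen on 10.0.0.2 from inside the namespace. A condensed sketch of that construction, using only commands that appear in the trace (the 10.0.0.3 interface pair is built the same way, and the individual links are also brought up, as traced):

    # One veth pair per side; the target end lives inside the namespace and
    # everything is stitched together with a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT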
00:16:36.760 [2024-12-13 06:55:41.097791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.760 06:55:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.760 06:55:41 -- common/autotest_common.sh@862 -- # return 0 00:16:36.760 06:55:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.760 06:55:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 06:55:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.760 06:55:41 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.760 06:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 [2024-12-13 06:55:41.219668] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.760 06:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.760 06:55:41 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:36.760 06:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 [2024-12-13 06:55:41.231845] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:36.760 06:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.760 06:55:41 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:36.760 06:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 null0 00:16:36.760 06:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.760 06:55:41 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:36.760 06:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 null1 00:16:36.760 06:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.760 06:55:41 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:36.760 06:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.760 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.760 06:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.761 06:55:41 -- host/discovery.sh@45 -- # hostpid=82513 00:16:36.761 06:55:41 -- host/discovery.sh@46 -- # waitforlisten 82513 /tmp/host.sock 00:16:36.761 06:55:41 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:36.761 06:55:41 -- common/autotest_common.sh@829 -- # '[' -z 82513 ']' 00:16:36.761 06:55:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:36.761 06:55:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.761 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:36.761 06:55:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:36.761 06:55:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.761 06:55:41 -- common/autotest_common.sh@10 -- # set +x 00:16:37.020 [2024-12-13 06:55:41.315545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:37.020 [2024-12-13 06:55:41.315651] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82513 ] 00:16:37.020 [2024-12-13 06:55:41.457486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.020 [2024-12-13 06:55:41.497786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.020 [2024-12-13 06:55:41.497998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.958 06:55:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.958 06:55:42 -- common/autotest_common.sh@862 -- # return 0 00:16:37.958 06:55:42 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.958 06:55:42 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:37.958 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.958 06:55:42 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:37.958 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.958 06:55:42 -- host/discovery.sh@72 -- # notify_id=0 00:16:37.958 06:55:42 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # sort 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # xargs 00:16:37.958 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.958 06:55:42 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:37.958 06:55:42 -- host/discovery.sh@79 -- # get_bdev_list 00:16:37.958 06:55:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.958 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:37.958 06:55:42 -- host/discovery.sh@55 -- # sort 00:16:37.958 06:55:42 -- host/discovery.sh@55 -- # xargs 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.958 06:55:42 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:37.958 06:55:42 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:37.958 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.958 06:55:42 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:37.958 06:55:42 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # xargs 00:16:37.958 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.958 06:55:42 -- host/discovery.sh@59 -- # sort 00:16:37.958 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:38.217 06:55:42 -- host/discovery.sh@83 -- # get_bdev_list 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # sort 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # xargs 00:16:38.217 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:38.217 06:55:42 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # sort 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # xargs 00:16:38.217 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:38.217 06:55:42 -- host/discovery.sh@87 -- # get_bdev_list 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # sort 00:16:38.217 06:55:42 -- host/discovery.sh@55 -- # xargs 00:16:38.217 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:38.217 06:55:42 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 [2024-12-13 06:55:42.688241] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.217 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.217 06:55:42 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.217 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.217 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # sort 00:16:38.217 06:55:42 -- host/discovery.sh@59 -- # xargs 
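At this point the discovery host (started above with bdev_nvme_start_discovery against 10.0.0.2:8009) still sees nothing: cnode0 exists with namespace null0 and a 4420 listener, but the host NQN has not been admitted yet. The nvmf_subsystem_add_host call that follows grants nqn.2021-12.io.spdk:test access, and the attach notices a few lines down show the discovery poller picking the subsystem up within the one-second sleep. The target-side build-up, condensed from the rpc_cmd traces above (rpc_cmd forwards to scripts/rpc.py, here against the target's default /var/tmp/spdk.sock):

    # Build a discoverable subsystem; the discovery poller on /tmp/host.sock
    # attaches it as nvme0 only once the host NQN is allowed in.
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test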
00:16:38.218 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.477 06:55:42 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:38.477 06:55:42 -- host/discovery.sh@93 -- # get_bdev_list 00:16:38.477 06:55:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.477 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.477 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.477 06:55:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.477 06:55:42 -- host/discovery.sh@55 -- # sort 00:16:38.477 06:55:42 -- host/discovery.sh@55 -- # xargs 00:16:38.477 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.477 06:55:42 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:38.477 06:55:42 -- host/discovery.sh@94 -- # get_notification_count 00:16:38.477 06:55:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:38.477 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.477 06:55:42 -- host/discovery.sh@74 -- # jq '. | length' 00:16:38.477 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.477 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.477 06:55:42 -- host/discovery.sh@74 -- # notification_count=0 00:16:38.477 06:55:42 -- host/discovery.sh@75 -- # notify_id=0 00:16:38.477 06:55:42 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:38.477 06:55:42 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:38.477 06:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.477 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.477 06:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.477 06:55:42 -- host/discovery.sh@100 -- # sleep 1 00:16:39.049 [2024-12-13 06:55:43.344410] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:39.049 [2024-12-13 06:55:43.344456] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:39.049 [2024-12-13 06:55:43.344474] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:39.049 [2024-12-13 06:55:43.350452] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:39.049 [2024-12-13 06:55:43.405943] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:39.049 [2024-12-13 06:55:43.405985] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:39.618 06:55:43 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:39.618 06:55:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.618 06:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.618 06:55:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.618 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 06:55:43 -- host/discovery.sh@59 -- # sort 00:16:39.618 06:55:43 -- host/discovery.sh@59 -- # xargs 00:16:39.618 06:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.618 06:55:43 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.618 06:55:43 -- host/discovery.sh@102 -- # get_bdev_list 00:16:39.618 06:55:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.618 
06:55:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.618 06:55:43 -- host/discovery.sh@55 -- # xargs 00:16:39.618 06:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.618 06:55:43 -- host/discovery.sh@55 -- # sort 00:16:39.618 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 06:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.618 06:55:43 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:39.618 06:55:43 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:39.618 06:55:43 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:39.618 06:55:43 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:39.618 06:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.618 06:55:43 -- host/discovery.sh@63 -- # sort -n 00:16:39.618 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 06:55:43 -- host/discovery.sh@63 -- # xargs 00:16:39.618 06:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.618 06:55:44 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.618 06:55:44 -- host/discovery.sh@104 -- # get_notification_count 00:16:39.618 06:55:44 -- host/discovery.sh@74 -- # jq '. | length' 00:16:39.618 06:55:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:39.618 06:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.618 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 06:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.618 06:55:44 -- host/discovery.sh@74 -- # notification_count=1 00:16:39.618 06:55:44 -- host/discovery.sh@75 -- # notify_id=1 00:16:39.618 06:55:44 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:39.618 06:55:44 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:39.618 06:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.618 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 06:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.618 06:55:44 -- host/discovery.sh@109 -- # sleep 1 00:16:40.995 06:55:45 -- host/discovery.sh@110 -- # get_bdev_list 00:16:40.995 06:55:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.995 06:55:45 -- host/discovery.sh@55 -- # sort 00:16:40.995 06:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.995 06:55:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:40.995 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.995 06:55:45 -- host/discovery.sh@55 -- # xargs 00:16:40.995 06:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.995 06:55:45 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:40.995 06:55:45 -- host/discovery.sh@111 -- # get_notification_count 00:16:40.995 06:55:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:40.995 06:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.995 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.995 06:55:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:40.995 06:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.995 06:55:45 -- host/discovery.sh@74 -- # notification_count=1 00:16:40.995 06:55:45 -- host/discovery.sh@75 -- # notify_id=2 00:16:40.995 06:55:45 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:40.995 06:55:45 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:40.995 06:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.995 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.995 [2024-12-13 06:55:45.186859] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:40.995 [2024-12-13 06:55:45.187639] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:40.995 [2024-12-13 06:55:45.187724] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:40.995 06:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.995 06:55:45 -- host/discovery.sh@117 -- # sleep 1 00:16:40.995 [2024-12-13 06:55:45.193631] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:40.995 [2024-12-13 06:55:45.257939] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:40.995 [2024-12-13 06:55:45.257964] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:40.995 [2024-12-13 06:55:45.257987] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:41.930 06:55:46 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:41.930 06:55:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.930 06:55:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.930 06:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 06:55:46 -- host/discovery.sh@59 -- # sort 00:16:41.930 06:55:46 -- host/discovery.sh@59 -- # xargs 00:16:41.930 06:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@119 -- # get_bdev_list 00:16:41.930 06:55:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.930 06:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 06:55:46 -- host/discovery.sh@55 -- # sort 00:16:41.930 06:55:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.930 06:55:46 -- host/discovery.sh@55 -- # xargs 00:16:41.930 06:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:41.930 06:55:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:41.930 06:55:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:41.930 06:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 06:55:46 -- host/discovery.sh@63 -- # sort -n 00:16:41.930 06:55:46 -- common/autotest_common.sh@10 
-- # set +x 00:16:41.930 06:55:46 -- host/discovery.sh@63 -- # xargs 00:16:41.930 06:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@121 -- # get_notification_count 00:16:41.930 06:55:46 -- host/discovery.sh@74 -- # jq '. | length' 00:16:41.930 06:55:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:41.930 06:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 06:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@74 -- # notification_count=0 00:16:41.930 06:55:46 -- host/discovery.sh@75 -- # notify_id=2 00:16:41.930 06:55:46 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.930 06:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 06:55:46 -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 [2024-12-13 06:55:46.417285] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:41.930 [2024-12-13 06:55:46.417337] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:41.930 06:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 06:55:46 -- host/discovery.sh@127 -- # sleep 1 00:16:41.930 [2024-12-13 06:55:46.423279] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:41.930 [2024-12-13 06:55:46.423330] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:41.930 [2024-12-13 06:55:46.423481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.930 [2024-12-13 06:55:46.423520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.930 [2024-12-13 06:55:46.423534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.930 [2024-12-13 06:55:46.423543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.930 [2024-12-13 06:55:46.423553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.930 [2024-12-13 06:55:46.423562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.930 [2024-12-13 06:55:46.423572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.930 [2024-12-13 06:55:46.423581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.931 [2024-12-13 06:55:46.423590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e61f0 is same with the state(5) to be set 00:16:43.308 06:55:47 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:16:43.308 06:55:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:43.308 06:55:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:43.308 06:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.308 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 06:55:47 -- host/discovery.sh@59 -- # sort 00:16:43.308 06:55:47 -- host/discovery.sh@59 -- # xargs 00:16:43.308 06:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@129 -- # get_bdev_list 00:16:43.308 06:55:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.308 06:55:47 -- host/discovery.sh@55 -- # sort 00:16:43.308 06:55:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.308 06:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.308 06:55:47 -- host/discovery.sh@55 -- # xargs 00:16:43.308 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 06:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:43.308 06:55:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:43.308 06:55:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:43.308 06:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.308 06:55:47 -- host/discovery.sh@63 -- # sort -n 00:16:43.308 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 06:55:47 -- host/discovery.sh@63 -- # xargs 00:16:43.308 06:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:43.308 06:55:47 -- host/discovery.sh@131 -- # get_notification_count 00:16:43.308 06:55:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:43.308 06:55:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:43.308 06:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.309 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.309 06:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.309 06:55:47 -- host/discovery.sh@74 -- # notification_count=0 00:16:43.309 06:55:47 -- host/discovery.sh@75 -- # notify_id=2 00:16:43.309 06:55:47 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:43.309 06:55:47 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:43.309 06:55:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.309 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.309 06:55:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.309 06:55:47 -- host/discovery.sh@135 -- # sleep 1 00:16:44.245 06:55:48 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:44.245 06:55:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:44.245 06:55:48 -- host/discovery.sh@59 -- # sort 00:16:44.245 06:55:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:44.245 06:55:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.245 06:55:48 -- host/discovery.sh@59 -- # xargs 00:16:44.245 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:16:44.245 06:55:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.245 06:55:48 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:44.245 06:55:48 -- host/discovery.sh@137 -- # get_bdev_list 00:16:44.245 06:55:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.245 06:55:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.245 06:55:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.245 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:16:44.245 06:55:48 -- host/discovery.sh@55 -- # xargs 00:16:44.245 06:55:48 -- host/discovery.sh@55 -- # sort 00:16:44.245 06:55:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.504 06:55:48 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:44.504 06:55:48 -- host/discovery.sh@138 -- # get_notification_count 00:16:44.504 06:55:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:44.504 06:55:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:44.504 06:55:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.504 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:16:44.504 06:55:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.504 06:55:48 -- host/discovery.sh@74 -- # notification_count=2 00:16:44.504 06:55:48 -- host/discovery.sh@75 -- # notify_id=4 00:16:44.504 06:55:48 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:44.504 06:55:48 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.504 06:55:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.504 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 [2024-12-13 06:55:49.837121] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:45.440 [2024-12-13 06:55:49.837159] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:45.440 [2024-12-13 06:55:49.837175] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:45.440 [2024-12-13 06:55:49.843167] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:45.440 [2024-12-13 06:55:49.902262] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:45.440 [2024-12-13 06:55:49.902501] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:45.440 06:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.440 06:55:49 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.440 06:55:49 -- common/autotest_common.sh@650 -- # local es=0 00:16:45.440 06:55:49 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.440 06:55:49 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.440 06:55:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.440 06:55:49 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.440 06:55:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.440 06:55:49 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.440 06:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.440 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 request: 00:16:45.440 { 00:16:45.440 "name": "nvme", 00:16:45.440 "trtype": "tcp", 00:16:45.440 "traddr": "10.0.0.2", 00:16:45.440 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:45.440 "adrfam": "ipv4", 00:16:45.440 "trsvcid": "8009", 00:16:45.440 "wait_for_attach": true, 00:16:45.440 "method": "bdev_nvme_start_discovery", 00:16:45.440 "req_id": 1 00:16:45.440 } 00:16:45.440 Got JSON-RPC error response 00:16:45.440 response: 00:16:45.440 { 00:16:45.440 "code": -17, 00:16:45.440 "message": "File exists" 00:16:45.440 } 00:16:45.440 06:55:49 -- common/autotest_common.sh@589 -- # 
[[ 1 == 0 ]] 00:16:45.440 06:55:49 -- common/autotest_common.sh@653 -- # es=1 00:16:45.440 06:55:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.440 06:55:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.440 06:55:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.440 06:55:49 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:45.440 06:55:49 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:45.440 06:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.440 06:55:49 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:45.440 06:55:49 -- host/discovery.sh@67 -- # sort 00:16:45.440 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 06:55:49 -- host/discovery.sh@67 -- # xargs 00:16:45.440 06:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.700 06:55:49 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:45.700 06:55:49 -- host/discovery.sh@147 -- # get_bdev_list 00:16:45.700 06:55:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.700 06:55:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:45.700 06:55:49 -- host/discovery.sh@55 -- # xargs 00:16:45.700 06:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.700 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.700 06:55:49 -- host/discovery.sh@55 -- # sort 00:16:45.700 06:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.700 06:55:50 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:45.700 06:55:50 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.700 06:55:50 -- common/autotest_common.sh@650 -- # local es=0 00:16:45.700 06:55:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.700 06:55:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.700 06:55:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:45.700 06:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.700 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.700 request: 00:16:45.700 { 00:16:45.700 "name": "nvme_second", 00:16:45.700 "trtype": "tcp", 00:16:45.700 "traddr": "10.0.0.2", 00:16:45.700 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:45.700 "adrfam": "ipv4", 00:16:45.700 "trsvcid": "8009", 00:16:45.700 "wait_for_attach": true, 00:16:45.700 "method": "bdev_nvme_start_discovery", 00:16:45.700 "req_id": 1 00:16:45.700 } 00:16:45.700 Got JSON-RPC error response 00:16:45.700 response: 00:16:45.700 { 00:16:45.700 "code": -17, 00:16:45.700 "message": "File exists" 00:16:45.700 } 00:16:45.700 06:55:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:45.700 06:55:50 -- common/autotest_common.sh@653 -- # es=1 00:16:45.700 06:55:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.700 06:55:50 -- common/autotest_common.sh@672 -- 
# [[ -n '' ]] 00:16:45.700 06:55:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.700 06:55:50 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:45.700 06:55:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:45.700 06:55:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:45.700 06:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.700 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.700 06:55:50 -- host/discovery.sh@67 -- # xargs 00:16:45.700 06:55:50 -- host/discovery.sh@67 -- # sort 00:16:45.700 06:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.700 06:55:50 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:45.700 06:55:50 -- host/discovery.sh@153 -- # get_bdev_list 00:16:45.700 06:55:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.700 06:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.700 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.700 06:55:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:45.700 06:55:50 -- host/discovery.sh@55 -- # sort 00:16:45.700 06:55:50 -- host/discovery.sh@55 -- # xargs 00:16:45.700 06:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.700 06:55:50 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:45.700 06:55:50 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.700 06:55:50 -- common/autotest_common.sh@650 -- # local es=0 00:16:45.700 06:55:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.700 06:55:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:45.700 06:55:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.700 06:55:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:45.700 06:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.700 06:55:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.080 [2024-12-13 06:55:51.184477] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.080 [2024-12-13 06:55:51.184830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.080 [2024-12-13 06:55:51.184883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.080 [2024-12-13 06:55:51.184901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187f5c0 with addr=10.0.0.2, port=8010 00:16:47.080 [2024-12-13 06:55:51.184921] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:47.080 [2024-12-13 06:55:51.184949] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:47.080 [2024-12-13 06:55:51.184959] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:48.019 [2024-12-13 06:55:52.184480] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.019 
[2024-12-13 06:55:52.184840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.019 [2024-12-13 06:55:52.184892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.019 [2024-12-13 06:55:52.184910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841bc0 with addr=10.0.0.2, port=8010 00:16:48.019 [2024-12-13 06:55:52.184962] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:48.019 [2024-12-13 06:55:52.184972] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:48.019 [2024-12-13 06:55:52.184982] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:48.956 [2024-12-13 06:55:53.184321] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:48.956 request: 00:16:48.956 { 00:16:48.956 "name": "nvme_second", 00:16:48.956 "trtype": "tcp", 00:16:48.956 "traddr": "10.0.0.2", 00:16:48.956 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:48.956 "adrfam": "ipv4", 00:16:48.956 "trsvcid": "8010", 00:16:48.956 "attach_timeout_ms": 3000, 00:16:48.956 "method": "bdev_nvme_start_discovery", 00:16:48.956 "req_id": 1 00:16:48.956 } 00:16:48.956 Got JSON-RPC error response 00:16:48.956 response: 00:16:48.956 { 00:16:48.956 "code": -110, 00:16:48.956 "message": "Connection timed out" 00:16:48.956 } 00:16:48.956 06:55:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:48.956 06:55:53 -- common/autotest_common.sh@653 -- # es=1 00:16:48.956 06:55:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.956 06:55:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.956 06:55:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.956 06:55:53 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:48.956 06:55:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:48.956 06:55:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:48.956 06:55:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.956 06:55:53 -- host/discovery.sh@67 -- # sort 00:16:48.956 06:55:53 -- host/discovery.sh@67 -- # xargs 00:16:48.956 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.956 06:55:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.956 06:55:53 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:48.956 06:55:53 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:48.956 06:55:53 -- host/discovery.sh@162 -- # kill 82513 00:16:48.957 06:55:53 -- host/discovery.sh@163 -- # nvmftestfini 00:16:48.957 06:55:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:48.957 06:55:53 -- nvmf/common.sh@116 -- # sync 00:16:48.957 06:55:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:48.957 06:55:53 -- nvmf/common.sh@119 -- # set +e 00:16:48.957 06:55:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:48.957 06:55:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:48.957 rmmod nvme_tcp 00:16:48.957 rmmod nvme_fabrics 00:16:48.957 rmmod nvme_keyring 00:16:48.957 06:55:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:48.957 06:55:53 -- nvmf/common.sh@123 -- # set -e 00:16:48.957 06:55:53 -- nvmf/common.sh@124 -- # return 0 00:16:48.957 06:55:53 -- nvmf/common.sh@477 -- # '[' -n 82493 ']' 00:16:48.957 06:55:53 -- nvmf/common.sh@478 -- # killprocess 82493 00:16:48.957 06:55:53 -- common/autotest_common.sh@936 -- # '[' -z 82493 ']' 
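The two JSON-RPC failures traced above are the outcomes the test asserts, not defects: bdev_nvme_start_discovery rejects a base controller name that is already in use with -17 "File exists", and with an attach timeout it gives up when nothing answers on the port, returning -110 "Connection timed out" after the repeated connect() failures (errno 111) logged above. A minimal sketch of that sequence, with flags taken verbatim from the trace; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, pointed at the host app's /tmp/host.sock:

    # First call succeeds: attaches discovery controller "nvme" and, with -w,
    # waits until the discovered subsystem is attached.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Repeating it with the same -b name fails with -17 "File exists".
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Nothing listens on port 8010, so with a 3000 ms attach timeout (-T)
    # this fails with -110 "Connection timed out".
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000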
00:16:48.957 06:55:53 -- common/autotest_common.sh@940 -- # kill -0 82493 00:16:48.957 06:55:53 -- common/autotest_common.sh@941 -- # uname 00:16:48.957 06:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.957 06:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82493 00:16:48.957 killing process with pid 82493 00:16:48.957 06:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:48.957 06:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:48.957 06:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82493' 00:16:48.957 06:55:53 -- common/autotest_common.sh@955 -- # kill 82493 00:16:48.957 06:55:53 -- common/autotest_common.sh@960 -- # wait 82493 00:16:49.216 06:55:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:49.216 06:55:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:49.216 06:55:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:49.216 06:55:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.216 06:55:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:49.216 06:55:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.216 06:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.216 06:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.216 06:55:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:49.216 ************************************ 00:16:49.216 00:16:49.216 real 0m13.238s 00:16:49.216 user 0m25.948s 00:16:49.216 sys 0m2.124s 00:16:49.216 06:55:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:49.216 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:16:49.216 END TEST nvmf_discovery 00:16:49.216 ************************************ 00:16:49.216 06:55:53 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.216 06:55:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.216 06:55:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.216 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:16:49.216 ************************************ 00:16:49.216 START TEST nvmf_discovery_remove_ifc 00:16:49.216 ************************************ 00:16:49.216 06:55:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.216 * Looking for test storage... 
00:16:49.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.216 06:55:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:49.216 06:55:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:49.216 06:55:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:49.476 06:55:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:49.476 06:55:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:49.476 06:55:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:49.477 06:55:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:49.477 06:55:53 -- scripts/common.sh@335 -- # IFS=.-: 00:16:49.477 06:55:53 -- scripts/common.sh@335 -- # read -ra ver1 00:16:49.477 06:55:53 -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.477 06:55:53 -- scripts/common.sh@336 -- # read -ra ver2 00:16:49.477 06:55:53 -- scripts/common.sh@337 -- # local 'op=<' 00:16:49.477 06:55:53 -- scripts/common.sh@339 -- # ver1_l=2 00:16:49.477 06:55:53 -- scripts/common.sh@340 -- # ver2_l=1 00:16:49.477 06:55:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:49.477 06:55:53 -- scripts/common.sh@343 -- # case "$op" in 00:16:49.477 06:55:53 -- scripts/common.sh@344 -- # : 1 00:16:49.477 06:55:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:49.477 06:55:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:49.477 06:55:53 -- scripts/common.sh@364 -- # decimal 1 00:16:49.477 06:55:53 -- scripts/common.sh@352 -- # local d=1 00:16:49.477 06:55:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.477 06:55:53 -- scripts/common.sh@354 -- # echo 1 00:16:49.477 06:55:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:49.477 06:55:53 -- scripts/common.sh@365 -- # decimal 2 00:16:49.477 06:55:53 -- scripts/common.sh@352 -- # local d=2 00:16:49.477 06:55:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.477 06:55:53 -- scripts/common.sh@354 -- # echo 2 00:16:49.477 06:55:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:49.477 06:55:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:49.477 06:55:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:49.477 06:55:53 -- scripts/common.sh@367 -- # return 0 00:16:49.477 06:55:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.477 06:55:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 06:55:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 06:55:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 
06:55:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.477 06:55:53 -- nvmf/common.sh@7 -- # uname -s 00:16:49.477 06:55:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.477 06:55:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.477 06:55:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.477 06:55:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.477 06:55:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.477 06:55:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.477 06:55:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.477 06:55:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.477 06:55:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.477 06:55:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.477 06:55:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:49.477 06:55:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:16:49.477 06:55:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.477 06:55:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.477 06:55:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.477 06:55:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.477 06:55:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.477 06:55:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.477 06:55:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.477 06:55:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.477 06:55:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.477 06:55:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.477 06:55:53 -- paths/export.sh@5 -- # export PATH 00:16:49.477 06:55:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.477 06:55:53 -- nvmf/common.sh@46 -- # : 0 00:16:49.477 06:55:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.477 06:55:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.477 06:55:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.477 06:55:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.477 06:55:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.477 06:55:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:49.477 06:55:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.477 06:55:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:49.477 06:55:53 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:49.477 06:55:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.477 06:55:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.477 06:55:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.477 06:55:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.477 06:55:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.477 06:55:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.477 06:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.477 06:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.477 06:55:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:49.478 06:55:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:49.478 06:55:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:49.478 06:55:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:49.478 06:55:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:49.478 06:55:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:49.478 06:55:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.478 06:55:53 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.478 06:55:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.478 06:55:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:49.478 06:55:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.478 06:55:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.478 06:55:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.478 06:55:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.478 06:55:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.478 06:55:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.478 06:55:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.478 06:55:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.478 06:55:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:49.478 06:55:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:49.478 Cannot find device "nvmf_tgt_br" 00:16:49.478 06:55:53 -- nvmf/common.sh@154 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.478 Cannot find device "nvmf_tgt_br2" 00:16:49.478 06:55:53 -- nvmf/common.sh@155 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:49.478 06:55:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:49.478 Cannot find device "nvmf_tgt_br" 00:16:49.478 06:55:53 -- nvmf/common.sh@157 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:49.478 Cannot find device "nvmf_tgt_br2" 00:16:49.478 06:55:53 -- nvmf/common.sh@158 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:49.478 06:55:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:49.478 06:55:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.478 06:55:53 -- nvmf/common.sh@161 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.478 06:55:53 -- nvmf/common.sh@162 -- # true 00:16:49.478 06:55:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.478 06:55:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.478 06:55:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.737 06:55:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.737 06:55:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.737 06:55:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.737 06:55:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.737 06:55:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:49.737 06:55:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:49.737 06:55:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:49.737 06:55:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:49.737 06:55:54 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:49.738 06:55:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:49.738 06:55:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.738 06:55:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.738 06:55:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.738 06:55:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:49.738 06:55:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:49.738 06:55:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.738 06:55:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.738 06:55:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.738 06:55:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.738 06:55:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.738 06:55:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:49.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:49.738 00:16:49.738 --- 10.0.0.2 ping statistics --- 00:16:49.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.738 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:49.738 06:55:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:49.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:49.738 00:16:49.738 --- 10.0.0.3 ping statistics --- 00:16:49.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.738 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:49.738 06:55:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:49.738 00:16:49.738 --- 10.0.0.1 ping statistics --- 00:16:49.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.738 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:49.738 06:55:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.738 06:55:54 -- nvmf/common.sh@421 -- # return 0 00:16:49.738 06:55:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:49.738 06:55:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.738 06:55:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:49.738 06:55:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:49.738 06:55:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.738 06:55:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:49.738 06:55:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:49.738 06:55:54 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:49.738 06:55:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:49.738 06:55:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.738 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 06:55:54 -- nvmf/common.sh@469 -- # nvmfpid=83019 00:16:49.738 06:55:54 -- nvmf/common.sh@470 -- # waitforlisten 83019 00:16:49.738 06:55:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.738 06:55:54 -- common/autotest_common.sh@829 -- # '[' -z 83019 ']' 00:16:49.738 06:55:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.738 06:55:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.738 06:55:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.738 06:55:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.738 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 [2024-12-13 06:55:54.229952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:49.738 [2024-12-13 06:55:54.230040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.997 [2024-12-13 06:55:54.366211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.997 [2024-12-13 06:55:54.400340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.997 [2024-12-13 06:55:54.400766] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.997 [2024-12-13 06:55:54.400789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.997 [2024-12-13 06:55:54.400798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
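Before the target app above was started, nvmf_veth_init assembled the test network. For reference, a condensed sketch of what the xtrace shows (link bring-up, the second target interface, and the cleanup of stale devices are omitted): one veth pair per endpoint, the target end moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to a bridge so the initiator at 10.0.0.1 can reach the target at 10.0.0.2.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # the first of the three reachability checks above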
00:16:49.997 [2024-12-13 06:55:54.400851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.997 06:55:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.997 06:55:54 -- common/autotest_common.sh@862 -- # return 0 00:16:49.997 06:55:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.997 06:55:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.997 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 06:55:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.257 06:55:54 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:50.257 06:55:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.257 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 [2024-12-13 06:55:54.536058] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.257 [2024-12-13 06:55:54.544218] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:50.257 null0 00:16:50.257 [2024-12-13 06:55:54.576117] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.257 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:50.257 06:55:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.257 06:55:54 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83044 00:16:50.257 06:55:54 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:50.257 06:55:54 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83044 /tmp/host.sock 00:16:50.257 06:55:54 -- common/autotest_common.sh@829 -- # '[' -z 83044 ']' 00:16:50.257 06:55:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:50.257 06:55:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.257 06:55:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:50.257 06:55:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.257 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 [2024-12-13 06:55:54.647227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:50.257 [2024-12-13 06:55:54.647535] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83044 ] 00:16:50.518 [2024-12-13 06:55:54.792402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.518 [2024-12-13 06:55:54.832391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.518 [2024-12-13 06:55:54.832815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.518 06:55:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.518 06:55:54 -- common/autotest_common.sh@862 -- # return 0 00:16:50.518 06:55:54 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.518 06:55:54 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:50.518 06:55:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 06:55:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 06:55:54 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:50.518 06:55:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 06:55:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 06:55:54 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:50.518 06:55:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.464 [2024-12-13 06:55:55.978893] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:51.464 [2024-12-13 06:55:55.978950] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:51.464 [2024-12-13 06:55:55.978968] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:51.723 [2024-12-13 06:55:55.984988] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:51.723 [2024-12-13 06:55:56.040648] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:51.723 [2024-12-13 06:55:56.040693] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:51.723 [2024-12-13 06:55:56.040716] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:51.723 [2024-12-13 06:55:56.040730] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:51.723 [2024-12-13 06:55:56.040753] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:51.723 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.723 06:55:56 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.723 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.723 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.723 [2024-12-13 06:55:56.047681] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x149b2c0 was disconnected and freed. delete nvme_qpair. 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.723 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.723 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.723 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.723 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.723 06:55:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.660 06:55:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.660 06:55:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.660 06:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.660 06:55:57 -- common/autotest_common.sh@10 -- # set +x 00:16:52.660 06:55:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.660 06:55:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.660 06:55:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.930 06:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.930 06:55:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.930 06:55:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.889 06:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.889 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:16:53.889 06:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.889 06:55:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
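The bdev_get_bdevs / sleep 1 cycles that repeat from here are the test's wait_for_bdev helper polling until the host's bdev list matches an expected value: first "nvme0n1" after attach, then the empty string once the target interface has been pulled. Roughly, as a sketch against the same /tmp/host.sock socket (the real helper lives in discovery_remove_ifc.sh):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Poll once per second until the list of bdevs equals the expectation.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }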
00:16:54.827 06:55:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.827 06:55:59 -- common/autotest_common.sh@10 -- # set +x 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.827 06:55:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.827 06:55:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.208 06:56:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.208 06:56:00 -- common/autotest_common.sh@10 -- # set +x 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.208 06:56:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.208 06:56:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.146 06:56:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.146 06:56:01 -- common/autotest_common.sh@10 -- # set +x 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.146 06:56:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.146 06:56:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.146 [2024-12-13 06:56:01.469116] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:57.146 [2024-12-13 06:56:01.469346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.146 [2024-12-13 06:56:01.469442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.146 [2024-12-13 06:56:01.469456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.146 [2024-12-13 06:56:01.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.146 [2024-12-13 06:56:01.469478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.146 [2024-12-13 06:56:01.469488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.146 [2024-12-13 06:56:01.469498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.146 [2024-12-13 06:56:01.469506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.146 [2024-12-13 06:56:01.469516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.146 [2024-12-13 06:56:01.469526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.146 [2024-12-13 06:56:01.469535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f6c0 is same with the state(5) to be set 00:16:57.146 [2024-12-13 06:56:01.479113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145f6c0 (9): Bad file descriptor 00:16:57.146 [2024-12-13 06:56:01.489140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:58.084 06:56:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.084 06:56:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.084 06:56:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.084 06:56:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.084 06:56:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.084 06:56:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.084 06:56:02 -- common/autotest_common.sh@10 -- # set +x 00:16:58.084 [2024-12-13 06:56:02.527482] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:59.464 [2024-12-13 06:56:03.551469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:00.402 [2024-12-13 06:56:04.575479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:00.402 [2024-12-13 06:56:04.575927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145f6c0 with addr=10.0.0.2, port=4420 00:17:00.402 [2024-12-13 06:56:04.576290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f6c0 is same with the state(5) to be set 00:17:00.402 [2024-12-13 06:56:04.576633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:00.402 [2024-12-13 06:56:04.576883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:00.402 [2024-12-13 06:56:04.577139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:00.402 [2024-12-13 06:56:04.577173] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:00.402 [2024-12-13 06:56:04.577972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145f6c0 (9): Bad file descriptor 00:17:00.402 [2024-12-13 06:56:04.578038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
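This error cascade is the host-side consequence of the interface removal performed earlier in the test (commands from the trace, repeated here for reference): with the target address gone, every reconnect attempt fails, and the reconnect policy given when discovery was started (--reconnect-delay-sec 1, --ctrlr-loss-timeout-sec 2, --fast-io-fail-timeout-sec 1) lets the host declare the controller lost and delete nvme0n1.

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down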
00:17:00.402 [2024-12-13 06:56:04.578090] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:00.402 [2024-12-13 06:56:04.578157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.402 [2024-12-13 06:56:04.578187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.402 [2024-12-13 06:56:04.578223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.402 [2024-12-13 06:56:04.578244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.402 [2024-12-13 06:56:04.578266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.402 [2024-12-13 06:56:04.578286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.402 [2024-12-13 06:56:04.578307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.402 [2024-12-13 06:56:04.578327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.402 [2024-12-13 06:56:04.578375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.402 [2024-12-13 06:56:04.578398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.402 [2024-12-13 06:56:04.578419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:00.402 [2024-12-13 06:56:04.578452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145fad0 (9): Bad file descriptor 00:17:00.402 [2024-12-13 06:56:04.579064] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:00.402 [2024-12-13 06:56:04.579094] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:00.402 06:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.402 06:56:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.402 06:56:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.338 06:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.338 06:56:05 -- common/autotest_common.sh@10 -- # set +x 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.338 06:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.338 06:56:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.338 06:56:05 -- common/autotest_common.sh@10 -- # set +x 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.338 06:56:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:01.338 06:56:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.284 [2024-12-13 06:56:06.585179] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:02.284 [2024-12-13 06:56:06.585400] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:02.284 [2024-12-13 06:56:06.585464] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:02.284 [2024-12-13 06:56:06.591225] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:02.284 [2024-12-13 06:56:06.646724] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:02.284 [2024-12-13 06:56:06.647011] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:02.284 [2024-12-13 06:56:06.647077] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:02.284 [2024-12-13 06:56:06.647204] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:02.284 [2024-12-13 06:56:06.647267] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:02.284 [2024-12-13 06:56:06.653538] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x146c930 was disconnected and freed. delete nvme_qpair. 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.284 06:56:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.284 06:56:06 -- common/autotest_common.sh@10 -- # set +x 00:17:02.284 06:56:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:02.284 06:56:06 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83044 00:17:02.284 06:56:06 -- common/autotest_common.sh@936 -- # '[' -z 83044 ']' 00:17:02.284 06:56:06 -- common/autotest_common.sh@940 -- # kill -0 83044 00:17:02.284 06:56:06 -- common/autotest_common.sh@941 -- # uname 00:17:02.543 06:56:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.543 06:56:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83044 00:17:02.543 killing process with pid 83044 00:17:02.543 06:56:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.543 06:56:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.543 06:56:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83044' 00:17:02.543 06:56:06 -- common/autotest_common.sh@955 -- # kill 83044 00:17:02.543 06:56:06 -- common/autotest_common.sh@960 -- # wait 83044 00:17:02.543 06:56:06 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:02.543 06:56:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:02.543 06:56:06 -- nvmf/common.sh@116 -- # sync 00:17:02.543 06:56:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:02.543 06:56:07 -- nvmf/common.sh@119 -- # set +e 00:17:02.543 06:56:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:02.543 06:56:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:02.543 rmmod nvme_tcp 00:17:02.543 rmmod nvme_fabrics 00:17:02.543 rmmod nvme_keyring 00:17:02.802 06:56:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:02.802 06:56:07 -- nvmf/common.sh@123 -- # set -e 00:17:02.802 06:56:07 -- nvmf/common.sh@124 -- # return 0 00:17:02.802 06:56:07 -- nvmf/common.sh@477 -- # '[' -n 83019 ']' 00:17:02.802 06:56:07 -- nvmf/common.sh@478 -- # killprocess 83019 00:17:02.802 06:56:07 -- common/autotest_common.sh@936 -- # '[' -z 83019 ']' 00:17:02.802 06:56:07 -- common/autotest_common.sh@940 -- # kill -0 83019 00:17:02.802 06:56:07 -- common/autotest_common.sh@941 -- # uname 00:17:02.802 06:56:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.802 06:56:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83019 00:17:02.802 killing process with pid 83019 00:17:02.802 06:56:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:02.802 06:56:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:17:02.802 06:56:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83019' 00:17:02.802 06:56:07 -- common/autotest_common.sh@955 -- # kill 83019 00:17:02.802 06:56:07 -- common/autotest_common.sh@960 -- # wait 83019 00:17:02.802 06:56:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:02.802 06:56:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:02.802 06:56:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:02.802 06:56:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.802 06:56:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:02.802 06:56:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.802 06:56:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.802 06:56:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.802 06:56:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:02.802 00:17:02.802 real 0m13.668s 00:17:02.802 user 0m21.736s 00:17:02.802 sys 0m2.389s 00:17:02.802 ************************************ 00:17:02.802 END TEST nvmf_discovery_remove_ifc 00:17:02.802 ************************************ 00:17:02.802 06:56:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:02.802 06:56:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.061 06:56:07 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:03.061 06:56:07 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:03.061 06:56:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.061 06:56:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.062 06:56:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.062 ************************************ 00:17:03.062 START TEST nvmf_digest 00:17:03.062 ************************************ 00:17:03.062 06:56:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:03.062 * Looking for test storage... 00:17:03.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:03.062 06:56:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.062 06:56:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.062 06:56:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:03.062 06:56:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:03.062 06:56:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:03.062 06:56:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:03.062 06:56:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:03.062 06:56:07 -- scripts/common.sh@335 -- # IFS=.-: 00:17:03.062 06:56:07 -- scripts/common.sh@335 -- # read -ra ver1 00:17:03.062 06:56:07 -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.062 06:56:07 -- scripts/common.sh@336 -- # read -ra ver2 00:17:03.062 06:56:07 -- scripts/common.sh@337 -- # local 'op=<' 00:17:03.062 06:56:07 -- scripts/common.sh@339 -- # ver1_l=2 00:17:03.062 06:56:07 -- scripts/common.sh@340 -- # ver2_l=1 00:17:03.062 06:56:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:03.062 06:56:07 -- scripts/common.sh@343 -- # case "$op" in 00:17:03.062 06:56:07 -- scripts/common.sh@344 -- # : 1 00:17:03.062 06:56:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:03.062 06:56:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.062 06:56:07 -- scripts/common.sh@364 -- # decimal 1 00:17:03.062 06:56:07 -- scripts/common.sh@352 -- # local d=1 00:17:03.062 06:56:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.062 06:56:07 -- scripts/common.sh@354 -- # echo 1 00:17:03.062 06:56:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:03.062 06:56:07 -- scripts/common.sh@365 -- # decimal 2 00:17:03.062 06:56:07 -- scripts/common.sh@352 -- # local d=2 00:17:03.062 06:56:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.062 06:56:07 -- scripts/common.sh@354 -- # echo 2 00:17:03.062 06:56:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:03.062 06:56:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:03.062 06:56:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:03.062 06:56:07 -- scripts/common.sh@367 -- # return 0 00:17:03.062 06:56:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.062 06:56:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.062 --rc genhtml_branch_coverage=1 00:17:03.062 --rc genhtml_function_coverage=1 00:17:03.062 --rc genhtml_legend=1 00:17:03.062 --rc geninfo_all_blocks=1 00:17:03.062 --rc geninfo_unexecuted_blocks=1 00:17:03.062 00:17:03.062 ' 00:17:03.062 06:56:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.062 --rc genhtml_branch_coverage=1 00:17:03.062 --rc genhtml_function_coverage=1 00:17:03.062 --rc genhtml_legend=1 00:17:03.062 --rc geninfo_all_blocks=1 00:17:03.062 --rc geninfo_unexecuted_blocks=1 00:17:03.062 00:17:03.062 ' 00:17:03.062 06:56:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.062 --rc genhtml_branch_coverage=1 00:17:03.062 --rc genhtml_function_coverage=1 00:17:03.062 --rc genhtml_legend=1 00:17:03.062 --rc geninfo_all_blocks=1 00:17:03.062 --rc geninfo_unexecuted_blocks=1 00:17:03.062 00:17:03.062 ' 00:17:03.062 06:56:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.062 --rc genhtml_branch_coverage=1 00:17:03.062 --rc genhtml_function_coverage=1 00:17:03.062 --rc genhtml_legend=1 00:17:03.062 --rc geninfo_all_blocks=1 00:17:03.062 --rc geninfo_unexecuted_blocks=1 00:17:03.062 00:17:03.062 ' 00:17:03.062 06:56:07 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.062 06:56:07 -- nvmf/common.sh@7 -- # uname -s 00:17:03.062 06:56:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.062 06:56:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.062 06:56:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.062 06:56:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.062 06:56:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.062 06:56:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.062 06:56:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.062 06:56:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.062 06:56:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.062 06:56:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.062 06:56:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:17:03.062 
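The scripts/common.sh traces a little above are the lcov version gate (lt 1.15 2) choosing which coverage flags to export. A minimal stand-alone sketch of that dotted-version comparison, not the exact cmp_versions implementation:

version_lt() {
    # field-by-field compare of two dotted versions; true if $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov < 2: add branch/function coverage opts"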
06:56:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:17:03.062 06:56:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.062 06:56:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.062 06:56:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.062 06:56:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.062 06:56:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.062 06:56:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.062 06:56:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.062 06:56:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.062 06:56:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.062 06:56:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.062 06:56:07 -- paths/export.sh@5 -- # export PATH 00:17:03.062 06:56:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.062 06:56:07 -- nvmf/common.sh@46 -- # : 0 00:17:03.062 06:56:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:03.062 06:56:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:03.062 06:56:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:03.062 06:56:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.062 06:56:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.062 06:56:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:03.062 06:56:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:03.062 06:56:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:03.062 06:56:07 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:03.062 06:56:07 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:03.062 06:56:07 -- host/digest.sh@16 -- # runtime=2 00:17:03.062 06:56:07 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:03.062 06:56:07 -- host/digest.sh@132 -- # nvmftestinit 00:17:03.062 06:56:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:03.062 06:56:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.062 06:56:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:03.062 06:56:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:03.062 06:56:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:03.063 06:56:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.063 06:56:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.063 06:56:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.063 06:56:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:03.063 06:56:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:03.063 06:56:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:03.063 06:56:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:03.063 06:56:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:03.063 06:56:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:03.063 06:56:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.063 06:56:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.063 06:56:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.063 06:56:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:03.063 06:56:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.063 06:56:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.063 06:56:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.063 06:56:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.063 06:56:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.063 06:56:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.063 06:56:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.063 06:56:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.063 06:56:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:03.063 06:56:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:03.063 Cannot find device "nvmf_tgt_br" 00:17:03.063 06:56:07 -- nvmf/common.sh@154 -- # true 00:17:03.063 06:56:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.322 Cannot find device "nvmf_tgt_br2" 00:17:03.322 06:56:07 -- nvmf/common.sh@155 -- # true 00:17:03.322 06:56:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:03.322 06:56:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:03.322 Cannot find device "nvmf_tgt_br" 00:17:03.322 06:56:07 -- nvmf/common.sh@157 -- # true 00:17:03.322 06:56:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:03.322 Cannot find device "nvmf_tgt_br2" 00:17:03.322 06:56:07 -- nvmf/common.sh@158 -- # true 00:17:03.322 06:56:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:03.322 06:56:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:03.322 
06:56:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.322 06:56:07 -- nvmf/common.sh@161 -- # true 00:17:03.322 06:56:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.322 06:56:07 -- nvmf/common.sh@162 -- # true 00:17:03.322 06:56:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.322 06:56:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.322 06:56:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.322 06:56:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.322 06:56:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.322 06:56:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.322 06:56:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.322 06:56:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.322 06:56:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.322 06:56:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:03.322 06:56:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:03.322 06:56:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:03.322 06:56:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:03.322 06:56:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.322 06:56:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.322 06:56:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.322 06:56:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:03.322 06:56:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:03.322 06:56:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.322 06:56:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.322 06:56:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.322 06:56:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.322 06:56:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.581 06:56:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:03.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:03.581 00:17:03.581 --- 10.0.0.2 ping statistics --- 00:17:03.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.581 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:03.581 06:56:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:03.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:03.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:17:03.581 00:17:03.581 --- 10.0.0.3 ping statistics --- 00:17:03.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.581 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:03.581 06:56:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:03.582 00:17:03.582 --- 10.0.0.1 ping statistics --- 00:17:03.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.582 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:03.582 06:56:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.582 06:56:07 -- nvmf/common.sh@421 -- # return 0 00:17:03.582 06:56:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.582 06:56:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.582 06:56:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:03.582 06:56:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:03.582 06:56:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.582 06:56:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:03.582 06:56:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:03.582 06:56:07 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:03.582 06:56:07 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:03.582 06:56:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:03.582 06:56:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.582 06:56:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.582 ************************************ 00:17:03.582 START TEST nvmf_digest_clean 00:17:03.582 ************************************ 00:17:03.582 06:56:07 -- common/autotest_common.sh@1114 -- # run_digest 00:17:03.582 06:56:07 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:03.582 06:56:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.582 06:56:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.582 06:56:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.582 06:56:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:03.582 06:56:07 -- nvmf/common.sh@469 -- # nvmfpid=83460 00:17:03.582 06:56:07 -- nvmf/common.sh@470 -- # waitforlisten 83460 00:17:03.582 06:56:07 -- common/autotest_common.sh@829 -- # '[' -z 83460 ']' 00:17:03.582 06:56:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.582 06:56:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.582 06:56:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.582 06:56:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.582 06:56:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.582 [2024-12-13 06:56:07.938824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:03.582 [2024-12-13 06:56:07.939037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.582 [2024-12-13 06:56:08.075629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.841 [2024-12-13 06:56:08.115762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.841 [2024-12-13 06:56:08.116179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.841 [2024-12-13 06:56:08.116206] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.841 [2024-12-13 06:56:08.116217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.841 [2024-12-13 06:56:08.116269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.841 06:56:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.841 06:56:08 -- common/autotest_common.sh@862 -- # return 0 00:17:03.841 06:56:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:03.841 06:56:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.841 06:56:08 -- common/autotest_common.sh@10 -- # set +x 00:17:03.841 06:56:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.841 06:56:08 -- host/digest.sh@120 -- # common_target_config 00:17:03.841 06:56:08 -- host/digest.sh@43 -- # rpc_cmd 00:17:03.841 06:56:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.841 06:56:08 -- common/autotest_common.sh@10 -- # set +x 00:17:03.841 null0 00:17:03.841 [2024-12-13 06:56:08.270094] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.841 [2024-12-13 06:56:08.294253] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.841 06:56:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.841 06:56:08 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:03.841 06:56:08 -- host/digest.sh@77 -- # local rw bs qd 00:17:03.842 06:56:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:03.842 06:56:08 -- host/digest.sh@80 -- # rw=randread 00:17:03.842 06:56:08 -- host/digest.sh@80 -- # bs=4096 00:17:03.842 06:56:08 -- host/digest.sh@80 -- # qd=128 00:17:03.842 06:56:08 -- host/digest.sh@82 -- # bperfpid=83479 00:17:03.842 06:56:08 -- host/digest.sh@83 -- # waitforlisten 83479 /var/tmp/bperf.sock 00:17:03.842 06:56:08 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:03.842 06:56:08 -- common/autotest_common.sh@829 -- # '[' -z 83479 ']' 00:17:03.842 06:56:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:03.842 06:56:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.842 06:56:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:03.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
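This launch pattern repeats for every pass: bdevperf starts paused on its own RPC socket (-z --wait-for-rpc) and the script blocks until that socket answers. Condensed below, with an RPC probe standing in for the real waitforlisten helper:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
# block until bdevperf answers on its UNIX socket
until "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done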
00:17:03.842 06:56:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.842 06:56:08 -- common/autotest_common.sh@10 -- # set +x 00:17:03.842 [2024-12-13 06:56:08.351277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:03.842 [2024-12-13 06:56:08.351655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83479 ] 00:17:04.101 [2024-12-13 06:56:08.487946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.101 [2024-12-13 06:56:08.521982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.038 06:56:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.038 06:56:09 -- common/autotest_common.sh@862 -- # return 0 00:17:05.038 06:56:09 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:05.038 06:56:09 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:05.038 06:56:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:05.297 06:56:09 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.297 06:56:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.556 nvme0n1 00:17:05.556 06:56:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:05.556 06:56:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:05.556 Running I/O for 2 seconds... 
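The three RPCs traced above are the entire per-run flow for the clean case: resume framework init on the bperf socket, attach a TCP controller with data digest enabled (--ddgst), then drive I/O through bdevperf.py. Every command as traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests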
00:17:08.092 00:17:08.092 Latency(us) 00:17:08.092 [2024-12-13T06:56:12.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.092 [2024-12-13T06:56:12.611Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:08.092 nvme0n1 : 2.01 16062.55 62.74 0.00 0.00 7963.66 7179.17 18945.86 00:17:08.092 [2024-12-13T06:56:12.611Z] =================================================================================================================== 00:17:08.092 [2024-12-13T06:56:12.611Z] Total : 16062.55 62.74 0.00 0.00 7963.66 7179.17 18945.86 00:17:08.092 0 00:17:08.092 06:56:12 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:08.092 06:56:12 -- host/digest.sh@92 -- # get_accel_stats 00:17:08.092 06:56:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:08.092 06:56:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:08.092 | select(.opcode=="crc32c") 00:17:08.092 | "\(.module_name) \(.executed)"' 00:17:08.092 06:56:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:08.092 06:56:12 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:08.092 06:56:12 -- host/digest.sh@93 -- # exp_module=software 00:17:08.092 06:56:12 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:08.092 06:56:12 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:08.092 06:56:12 -- host/digest.sh@97 -- # killprocess 83479 00:17:08.092 06:56:12 -- common/autotest_common.sh@936 -- # '[' -z 83479 ']' 00:17:08.092 06:56:12 -- common/autotest_common.sh@940 -- # kill -0 83479 00:17:08.092 06:56:12 -- common/autotest_common.sh@941 -- # uname 00:17:08.092 06:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.092 06:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83479 00:17:08.092 killing process with pid 83479 00:17:08.092 Received shutdown signal, test time was about 2.000000 seconds 00:17:08.092 00:17:08.092 Latency(us) 00:17:08.092 [2024-12-13T06:56:12.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.092 [2024-12-13T06:56:12.611Z] =================================================================================================================== 00:17:08.092 [2024-12-13T06:56:12.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.092 06:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:08.092 06:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:08.092 06:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83479' 00:17:08.092 06:56:12 -- common/autotest_common.sh@955 -- # kill 83479 00:17:08.092 06:56:12 -- common/autotest_common.sh@960 -- # wait 83479 00:17:08.092 06:56:12 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:08.092 06:56:12 -- host/digest.sh@77 -- # local rw bs qd 00:17:08.092 06:56:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:08.092 06:56:12 -- host/digest.sh@80 -- # rw=randread 00:17:08.092 06:56:12 -- host/digest.sh@80 -- # bs=131072 00:17:08.092 06:56:12 -- host/digest.sh@80 -- # qd=16 00:17:08.092 06:56:12 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:08.092 06:56:12 -- host/digest.sh@82 -- # bperfpid=83539 00:17:08.092 06:56:12 -- host/digest.sh@83 -- # waitforlisten 83539 /var/tmp/bperf.sock 00:17:08.092 06:56:12 -- 
common/autotest_common.sh@829 -- # '[' -z 83539 ']' 00:17:08.092 06:56:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:08.092 06:56:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.092 06:56:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:08.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:08.092 06:56:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.092 06:56:12 -- common/autotest_common.sh@10 -- # set +x 00:17:08.092 [2024-12-13 06:56:12.495579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.092 [2024-12-13 06:56:12.495861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83539 ] 00:17:08.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:08.092 Zero copy mechanism will not be used. 00:17:08.351 [2024-12-13 06:56:12.627095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.351 [2024-12-13 06:56:12.661193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.351 06:56:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.351 06:56:12 -- common/autotest_common.sh@862 -- # return 0 00:17:08.351 06:56:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:08.351 06:56:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:08.351 06:56:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:08.611 06:56:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:08.611 06:56:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:08.870 nvme0n1 00:17:08.870 06:56:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:08.870 06:56:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:09.129 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.129 Zero copy mechanism will not be used. 00:17:09.129 Running I/O for 2 seconds... 
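As with the first pass, the result is then checked by reading accel statistics off the bperf socket and asserting that the crc32c digests actually ran in the software module (no offload engine is configured here). The check, with the jq filter exactly as traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
read -r acc_module acc_executed < <(
    $rpc -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
[[ $acc_module == software ]] && (( acc_executed > 0 ))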
00:17:11.035 00:17:11.035 Latency(us) 00:17:11.035 [2024-12-13T06:56:15.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.035 [2024-12-13T06:56:15.554Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:11.035 nvme0n1 : 2.00 8046.83 1005.85 0.00 0.00 1985.40 1712.87 3232.12 00:17:11.035 [2024-12-13T06:56:15.554Z] =================================================================================================================== 00:17:11.035 [2024-12-13T06:56:15.554Z] Total : 8046.83 1005.85 0.00 0.00 1985.40 1712.87 3232.12 00:17:11.035 0 00:17:11.035 06:56:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:11.035 06:56:15 -- host/digest.sh@92 -- # get_accel_stats 00:17:11.035 06:56:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:11.035 06:56:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:11.035 | select(.opcode=="crc32c") 00:17:11.035 | "\(.module_name) \(.executed)"' 00:17:11.035 06:56:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:11.294 06:56:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:11.294 06:56:15 -- host/digest.sh@93 -- # exp_module=software 00:17:11.294 06:56:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:11.294 06:56:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:11.294 06:56:15 -- host/digest.sh@97 -- # killprocess 83539 00:17:11.294 06:56:15 -- common/autotest_common.sh@936 -- # '[' -z 83539 ']' 00:17:11.294 06:56:15 -- common/autotest_common.sh@940 -- # kill -0 83539 00:17:11.294 06:56:15 -- common/autotest_common.sh@941 -- # uname 00:17:11.294 06:56:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.294 06:56:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83539 00:17:11.294 killing process with pid 83539 00:17:11.294 Received shutdown signal, test time was about 2.000000 seconds 00:17:11.294 00:17:11.294 Latency(us) 00:17:11.294 [2024-12-13T06:56:15.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.294 [2024-12-13T06:56:15.814Z] =================================================================================================================== 00:17:11.295 [2024-12-13T06:56:15.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.295 06:56:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:11.295 06:56:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:11.295 06:56:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83539' 00:17:11.295 06:56:15 -- common/autotest_common.sh@955 -- # kill 83539 00:17:11.295 06:56:15 -- common/autotest_common.sh@960 -- # wait 83539 00:17:11.554 06:56:15 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:11.554 06:56:15 -- host/digest.sh@77 -- # local rw bs qd 00:17:11.554 06:56:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:11.554 06:56:15 -- host/digest.sh@80 -- # rw=randwrite 00:17:11.554 06:56:15 -- host/digest.sh@80 -- # bs=4096 00:17:11.554 06:56:15 -- host/digest.sh@80 -- # qd=128 00:17:11.554 06:56:15 -- host/digest.sh@82 -- # bperfpid=83587 00:17:11.554 06:56:15 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:11.554 06:56:15 -- host/digest.sh@83 -- # waitforlisten 83587 /var/tmp/bperf.sock 00:17:11.554 06:56:15 -- 
common/autotest_common.sh@829 -- # '[' -z 83587 ']' 00:17:11.554 06:56:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:11.554 06:56:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.554 06:56:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:11.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:11.554 06:56:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.554 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:17:11.554 [2024-12-13 06:56:15.957069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:11.554 [2024-12-13 06:56:15.957397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83587 ] 00:17:11.813 [2024-12-13 06:56:16.101302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.813 [2024-12-13 06:56:16.134971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.813 06:56:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.813 06:56:16 -- common/autotest_common.sh@862 -- # return 0 00:17:11.813 06:56:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:11.813 06:56:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:11.813 06:56:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:12.072 06:56:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.072 06:56:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.331 nvme0n1 00:17:12.331 06:56:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:12.331 06:56:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:12.590 Running I/O for 2 seconds... 
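For reference, the MiB/s column in these latency tables is just IOPS multiplied by the I/O size: 16062.55 IOPS x 4096 B / 2^20 = 62.74 MiB/s for the 4 KiB randread pass, and 8046.83 x 131072 / 2^20 = 1005.85 MiB/s for the 128 KiB one, matching the reported totals.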
00:17:14.569 00:17:14.569 Latency(us) 00:17:14.569 [2024-12-13T06:56:19.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.569 [2024-12-13T06:56:19.088Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.569 nvme0n1 : 2.00 17379.25 67.89 0.00 0.00 7359.29 6374.87 16086.11 00:17:14.569 [2024-12-13T06:56:19.088Z] =================================================================================================================== 00:17:14.569 [2024-12-13T06:56:19.088Z] Total : 17379.25 67.89 0.00 0.00 7359.29 6374.87 16086.11 00:17:14.569 0 00:17:14.569 06:56:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:14.569 06:56:18 -- host/digest.sh@92 -- # get_accel_stats 00:17:14.569 06:56:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:14.569 06:56:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:14.569 06:56:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:14.569 | select(.opcode=="crc32c") 00:17:14.569 | "\(.module_name) \(.executed)"' 00:17:14.829 06:56:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:14.829 06:56:19 -- host/digest.sh@93 -- # exp_module=software 00:17:14.829 06:56:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:14.829 06:56:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:14.829 06:56:19 -- host/digest.sh@97 -- # killprocess 83587 00:17:14.829 06:56:19 -- common/autotest_common.sh@936 -- # '[' -z 83587 ']' 00:17:14.829 06:56:19 -- common/autotest_common.sh@940 -- # kill -0 83587 00:17:14.829 06:56:19 -- common/autotest_common.sh@941 -- # uname 00:17:14.829 06:56:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.829 06:56:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83587 00:17:14.829 killing process with pid 83587 00:17:14.829 Received shutdown signal, test time was about 2.000000 seconds 00:17:14.829 00:17:14.829 Latency(us) 00:17:14.829 [2024-12-13T06:56:19.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.829 [2024-12-13T06:56:19.348Z] =================================================================================================================== 00:17:14.829 [2024-12-13T06:56:19.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.829 06:56:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:14.829 06:56:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:14.829 06:56:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83587' 00:17:14.829 06:56:19 -- common/autotest_common.sh@955 -- # kill 83587 00:17:14.829 06:56:19 -- common/autotest_common.sh@960 -- # wait 83587 00:17:15.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
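The kill/wait traces repeated between passes come from the killprocess helper in common/autotest_common.sh; its approximate shape, reconstructed from what the traces show:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                    # still alive?
    local pname
    pname=$(ps --no-headers -o comm= "$pid")      # reactor_0, reactor_1, ...
    [[ $pname != sudo ]] || return 1              # refuse to signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}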
00:17:15.088 06:56:19 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:15.088 06:56:19 -- host/digest.sh@77 -- # local rw bs qd 00:17:15.088 06:56:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:15.088 06:56:19 -- host/digest.sh@80 -- # rw=randwrite 00:17:15.088 06:56:19 -- host/digest.sh@80 -- # bs=131072 00:17:15.088 06:56:19 -- host/digest.sh@80 -- # qd=16 00:17:15.088 06:56:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:15.088 06:56:19 -- host/digest.sh@82 -- # bperfpid=83641 00:17:15.088 06:56:19 -- host/digest.sh@83 -- # waitforlisten 83641 /var/tmp/bperf.sock 00:17:15.088 06:56:19 -- common/autotest_common.sh@829 -- # '[' -z 83641 ']' 00:17:15.088 06:56:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:15.088 06:56:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.088 06:56:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:15.088 06:56:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.088 06:56:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.088 [2024-12-13 06:56:19.472704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:15.088 [2024-12-13 06:56:19.472981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83641 ] 00:17:15.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:15.088 Zero copy mechanism will not be used. 00:17:15.088 [2024-12-13 06:56:19.606503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.347 [2024-12-13 06:56:19.643287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.347 06:56:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.347 06:56:19 -- common/autotest_common.sh@862 -- # return 0 00:17:15.347 06:56:19 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:15.347 06:56:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:15.347 06:56:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:15.605 06:56:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:15.605 06:56:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:16.173 nvme0n1 00:17:16.173 06:56:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:16.173 06:56:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:16.173 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:16.173 Zero copy mechanism will not be used. 00:17:16.173 Running I/O for 2 seconds... 
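This is the last cell of a small workload matrix: digest.sh@122-125 run the same run_bperf pass for reads and writes at 4 KiB/qd128 and 128 KiB/qd16. An equivalent driver loop:

for spec in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
    read -r rw bs qd <<< "$spec"
    run_bperf "$rw" "$bs" "$qd"
done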
00:17:18.092 00:17:18.092 Latency(us) 00:17:18.092 [2024-12-13T06:56:22.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.092 [2024-12-13T06:56:22.611Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:18.092 nvme0n1 : 2.00 6942.34 867.79 0.00 0.00 2299.50 1980.97 5570.56 00:17:18.092 [2024-12-13T06:56:22.611Z] =================================================================================================================== 00:17:18.092 [2024-12-13T06:56:22.611Z] Total : 6942.34 867.79 0.00 0.00 2299.50 1980.97 5570.56 00:17:18.092 0 00:17:18.092 06:56:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:18.092 06:56:22 -- host/digest.sh@92 -- # get_accel_stats 00:17:18.092 06:56:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:18.092 06:56:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:18.092 | select(.opcode=="crc32c") 00:17:18.092 | "\(.module_name) \(.executed)"' 00:17:18.092 06:56:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:18.351 06:56:22 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:18.351 06:56:22 -- host/digest.sh@93 -- # exp_module=software 00:17:18.351 06:56:22 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:18.351 06:56:22 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:18.351 06:56:22 -- host/digest.sh@97 -- # killprocess 83641 00:17:18.351 06:56:22 -- common/autotest_common.sh@936 -- # '[' -z 83641 ']' 00:17:18.351 06:56:22 -- common/autotest_common.sh@940 -- # kill -0 83641 00:17:18.351 06:56:22 -- common/autotest_common.sh@941 -- # uname 00:17:18.351 06:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.351 06:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83641 00:17:18.351 killing process with pid 83641 00:17:18.351 Received shutdown signal, test time was about 2.000000 seconds 00:17:18.351 00:17:18.351 Latency(us) 00:17:18.351 [2024-12-13T06:56:22.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.351 [2024-12-13T06:56:22.870Z] =================================================================================================================== 00:17:18.351 [2024-12-13T06:56:22.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.351 06:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:18.351 06:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:18.351 06:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83641' 00:17:18.351 06:56:22 -- common/autotest_common.sh@955 -- # kill 83641 00:17:18.351 06:56:22 -- common/autotest_common.sh@960 -- # wait 83641 00:17:18.609 06:56:22 -- host/digest.sh@126 -- # killprocess 83460 00:17:18.609 06:56:22 -- common/autotest_common.sh@936 -- # '[' -z 83460 ']' 00:17:18.609 06:56:22 -- common/autotest_common.sh@940 -- # kill -0 83460 00:17:18.609 06:56:22 -- common/autotest_common.sh@941 -- # uname 00:17:18.609 06:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.609 06:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83460 00:17:18.609 killing process with pid 83460 00:17:18.609 06:56:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:18.609 06:56:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:18.609 06:56:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83460' 00:17:18.609 
06:56:23 -- common/autotest_common.sh@955 -- # kill 83460 00:17:18.609 06:56:23 -- common/autotest_common.sh@960 -- # wait 83460 00:17:18.868 ************************************ 00:17:18.868 END TEST nvmf_digest_clean 00:17:18.868 ************************************ 00:17:18.868 00:17:18.868 real 0m15.260s 00:17:18.868 user 0m29.870s 00:17:18.868 sys 0m4.289s 00:17:18.868 06:56:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:18.868 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 06:56:23 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:18.868 06:56:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:18.868 06:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.868 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 ************************************ 00:17:18.868 START TEST nvmf_digest_error 00:17:18.868 ************************************ 00:17:18.868 06:56:23 -- common/autotest_common.sh@1114 -- # run_digest_error 00:17:18.868 06:56:23 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:18.868 06:56:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:18.868 06:56:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.868 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.868 06:56:23 -- nvmf/common.sh@469 -- # nvmfpid=83719 00:17:18.868 06:56:23 -- nvmf/common.sh@470 -- # waitforlisten 83719 00:17:18.868 06:56:23 -- common/autotest_common.sh@829 -- # '[' -z 83719 ']' 00:17:18.868 06:56:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:18.868 06:56:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.869 06:56:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.869 06:56:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.869 06:56:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.869 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.869 [2024-12-13 06:56:23.258373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.869 [2024-12-13 06:56:23.258476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.128 [2024-12-13 06:56:23.390439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.128 [2024-12-13 06:56:23.424640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:19.128 [2024-12-13 06:56:23.425066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.128 [2024-12-13 06:56:23.425095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.128 [2024-12-13 06:56:23.425107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
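The nvmf_digest_error test starting here adds exactly one target-side setup step relative to the clean run, traced just below: crc32c work is rerouted to the error-injecting accel module before the TCP listener comes up.

# target-side opcode reassignment (host/digest.sh@103, via rpc_cmd)
rpc_cmd accel_assign_opc -o crc32c -m error
# target log: "Operation crc32c will be assigned to module error"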
00:17:19.128 [2024-12-13 06:56:23.425147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.128 06:56:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.128 06:56:23 -- common/autotest_common.sh@862 -- # return 0 00:17:19.128 06:56:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:19.128 06:56:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.128 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 06:56:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.128 06:56:23 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:19.128 06:56:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.128 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 [2024-12-13 06:56:23.521532] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:19.128 06:56:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.128 06:56:23 -- host/digest.sh@104 -- # common_target_config 00:17:19.128 06:56:23 -- host/digest.sh@43 -- # rpc_cmd 00:17:19.128 06:56:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.128 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 null0 00:17:19.128 [2024-12-13 06:56:23.588131] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.128 [2024-12-13 06:56:23.612476] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.128 06:56:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.128 06:56:23 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:19.128 06:56:23 -- host/digest.sh@54 -- # local rw bs qd 00:17:19.128 06:56:23 -- host/digest.sh@56 -- # rw=randread 00:17:19.128 06:56:23 -- host/digest.sh@56 -- # bs=4096 00:17:19.128 06:56:23 -- host/digest.sh@56 -- # qd=128 00:17:19.128 06:56:23 -- host/digest.sh@58 -- # bperfpid=83738 00:17:19.128 06:56:23 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:19.128 06:56:23 -- host/digest.sh@60 -- # waitforlisten 83738 /var/tmp/bperf.sock 00:17:19.128 06:56:23 -- common/autotest_common.sh@829 -- # '[' -z 83738 ']' 00:17:19.128 06:56:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:19.128 06:56:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.128 06:56:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:19.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:19.128 06:56:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.128 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:17:19.387 [2024-12-13 06:56:23.659069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:19.387 [2024-12-13 06:56:23.659336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83738 ] 00:17:19.387 [2024-12-13 06:56:23.793682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.387 [2024-12-13 06:56:23.829696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.325 06:56:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.325 06:56:24 -- common/autotest_common.sh@862 -- # return 0 00:17:20.325 06:56:24 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:20.325 06:56:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:20.325 06:56:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:20.325 06:56:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.325 06:56:24 -- common/autotest_common.sh@10 -- # set +x 00:17:20.585 06:56:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.585 06:56:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.585 06:56:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.844 nvme0n1 00:17:20.844 06:56:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:20.844 06:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.844 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:17:20.844 06:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.844 06:56:25 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:20.844 06:56:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:20.844 Running I/O for 2 seconds... 
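On the bdevperf side every step is printed verbatim above: per-error statistics and unlimited bdev-level retries are enabled, crc32c injection is switched off while the controller attaches with data digest negotiation on (--ddgst), and only then is injection set to corrupt before perform_tests starts the 2-second randread run. Condensed, against the /var/tmp/bperf.sock instance (all flags copied from the trace; -i 256 is the injection interval argument exactly as logged):

  # Keep per-error stats and retry failed I/O forever so the run never stalls.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach with data digest enabled; injection stays disabled so the
  # connect-time traffic is left uncorrupted.
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Start corrupting crc32c results, then drive the workload.
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The completions that follow are the expected failure mode: nvme_tcp.c flags each mismatching data digest, and the READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (sct/sc 00/22) with dnr:0, i.e. the do-not-retry bit clear, which is exactly what the --bdev-retry-count -1 setting above absorbs.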
00:17:20.844 [2024-12-13 06:56:25.280474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.280524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.280556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.844 [2024-12-13 06:56:25.297260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.297297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.297327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.844 [2024-12-13 06:56:25.313405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.313441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.313470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.844 [2024-12-13 06:56:25.328730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.328923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.328958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.844 [2024-12-13 06:56:25.344014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.344220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.344269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.844 [2024-12-13 06:56:25.360067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:20.844 [2024-12-13 06:56:25.360109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.844 [2024-12-13 06:56:25.360140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.377365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.377401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.377431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.394379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.394699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.394721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.410519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.410585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.410616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.426729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.426840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.426855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.444046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.444114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.444146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.460887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.103 [2024-12-13 06:56:25.460970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-12-13 06:56:25.460985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.103 [2024-12-13 06:56:25.476300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.476336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.476378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.490889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.490923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.490951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.505459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.505493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.505521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.519820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.520066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.520086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.534728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.534929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.534962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.549594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.549779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.549813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.564323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.564524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.564557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.581089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.581125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.581153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.595685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.595917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.595935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.104 [2024-12-13 06:56:25.610725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.104 [2024-12-13 06:56:25.610912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-12-13 06:56:25.610944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.626811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.363 [2024-12-13 06:56:25.626864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.363 [2024-12-13 06:56:25.626892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.641932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.363 [2024-12-13 06:56:25.641966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.363 [2024-12-13 06:56:25.641994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.656663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.363 [2024-12-13 06:56:25.656697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.363 [2024-12-13 06:56:25.656725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.671211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.363 [2024-12-13 06:56:25.671246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.363 [2024-12-13 06:56:25.671273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.685897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.363 [2024-12-13 06:56:25.685931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.363 [2024-12-13 06:56:25.685959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.363 [2024-12-13 06:56:25.700478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.700511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13174 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.700539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.714925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.714959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.714987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.729422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.729595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.729628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.744192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.744421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.744456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.758882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.759069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.759101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.773715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.773901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.773934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.788479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.788543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.802790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.802823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:1787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.802851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.817168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.817202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.817230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.831566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.831768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.831802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.846315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.846377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.846391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.860748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.860812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.364 [2024-12-13 06:56:25.875346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.364 [2024-12-13 06:56:25.875426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.364 [2024-12-13 06:56:25.875441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.623 [2024-12-13 06:56:25.892114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.623 [2024-12-13 06:56:25.892154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.892168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.907824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.907881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.907895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.922841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.922876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.922889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.937801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.937835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.937848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.952732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.952783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.952795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.967526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.967720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.967737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.982777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.982965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.982982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:25.997868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:25.998054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:25.998071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.013026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 
00:17:21.624 [2024-12-13 06:56:26.013221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.013369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.028292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.028518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.028712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.043833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.044080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.044303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.060326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.060566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.060773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.076762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.076975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.077102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.093632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.093841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.093976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.111934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.112143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.112303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.624 [2024-12-13 06:56:26.128825] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.624 [2024-12-13 06:56:26.129056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.624 [2024-12-13 06:56:26.129164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.146262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.146488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.146688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.164594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.164817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.164967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.182603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.182826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.183021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.200895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.201110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.201243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.217026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.217237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.217466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.233092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.233303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.233527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.249431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.249643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.249792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.271211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.271455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.271581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.286789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.287081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.884 [2024-12-13 06:56:26.287232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.884 [2024-12-13 06:56:26.303178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.884 [2024-12-13 06:56:26.303395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.303502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.319221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.319446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.319577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.335008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.335212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.335364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.350813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.350991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.351009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.365992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.366028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.366040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.380990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.381026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.381038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.885 [2024-12-13 06:56:26.396014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:21.885 [2024-12-13 06:56:26.396052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.885 [2024-12-13 06:56:26.396065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.412299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.412334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.412346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.427261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.427296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.427308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.442264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.442300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.442312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.460275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.460311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 
06:56:26.460324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.477432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.477655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.477691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.494183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.494223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.494236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.510787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.510969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.511004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.526397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.526433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.526463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.541966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.542177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.559978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.560015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.560044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.575248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.575434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5598 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.575468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.590717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.590912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.591121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.606882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.607073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.607223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.622409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.622601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.622741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.638084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.638275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.638449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.145 [2024-12-13 06:56:26.653814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.145 [2024-12-13 06:56:26.654019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.145 [2024-12-13 06:56:26.654173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.405 [2024-12-13 06:56:26.670432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.405 [2024-12-13 06:56:26.670622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.670764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.686067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.686258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.686485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.701980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.702172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.702322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.718788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.718998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.719135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.734763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.735004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.735146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.751281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.751319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.751332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.767061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.767112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.767125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.782455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.782499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.406 [2024-12-13 06:56:26.782512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:22.406 [2024-12-13 06:56:26.797555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de10b0) 00:17:22.406 [2024-12-13 06:56:26.797590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:22.406 [2024-12-13 06:56:26.797603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:22.667 [... similar records omitted, 06:56:26.812 through 06:56:27.255: every ~15 ms nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x1de10b0) and another READ (cid counting down 58 -> 2, lba varying, len:1) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 p:0 m:0 dnr:0 ...]
00:17:22.927
00:17:22.927 Latency(us)
00:17:22.927 [2024-12-13T06:56:27.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:22.927 [2024-12-13T06:56:27.446Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:22.927 nvme0n1 : 2.01 16076.64 62.80 0.00 0.00 7956.56 6970.65 29312.47
00:17:22.927 [2024-12-13T06:56:27.446Z] ===================================================================================================================
00:17:22.927 [2024-12-13T06:56:27.446Z] Total : 16076.64 62.80 0.00 0.00 7956.56 6970.65 29312.47
00:17:22.927 0
00:17:22.927 06:56:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:22.927 06:56:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:22.927 06:56:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:22.927 | .driver_specific
00:17:22.927 | .nvme_error
00:17:22.927 | .status_code
00:17:22.927 | .command_transient_transport_error'
00:17:23.186 06:56:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:23.186 06:56:27 -- host/digest.sh@71 -- # (( 126 > 0 ))
00:17:23.186 06:56:27 -- host/digest.sh@73 -- # killprocess 83738
00:17:23.186 06:56:27 -- common/autotest_common.sh@936 -- # '[' -z 83738 ']'
00:17:23.186 06:56:27 -- common/autotest_common.sh@940 -- # kill -0 83738
00:17:23.186 06:56:27 -- common/autotest_common.sh@941 -- # uname
00:17:23.186 06:56:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:23.186 06:56:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83738
00:17:23.186 killing process with pid 83738
00:17:23.186 Received shutdown signal, test time was about 2.000000 seconds
00:17:23.186
00:17:23.186 Latency(us)
00:17:23.186 [2024-12-13T06:56:27.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:23.186 [2024-12-13T06:56:27.706Z] ===================================================================================================================
00:17:23.186 [2024-12-13T06:56:27.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:23.186 06:56:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:23.186 06:56:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:23.186 06:56:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83738'
00:17:23.186 06:56:27 -- common/autotest_common.sh@955 -- # kill 83738
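The pass/fail check traced above reduces the whole run to one counter: get_transient_errcount queries the per-bdev NVMe error statistics that --nvme-error-stat enabled and pulls out how many completions carried COMMAND TRANSIENT TRANSPORT ERROR; here that count is 126, so (( 126 > 0 )) succeeds and the test passes. A minimal stand-alone sketch of the same check, assuming the same bperf RPC socket and bdev name as in this run:

    # count completions with the transient-transport-error status for nvme0n1
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # succeeds only if at least one injected error was observed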
common/autotest_common.sh@960 -- # wait 83738
00:17:23.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:56:27 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
06:56:27 -- host/digest.sh@54 -- # local rw bs qd
06:56:27 -- host/digest.sh@56 -- # rw=randread
06:56:27 -- host/digest.sh@56 -- # bs=131072
06:56:27 -- host/digest.sh@56 -- # qd=16
06:56:27 -- host/digest.sh@58 -- # bperfpid=83798
06:56:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
06:56:27 -- host/digest.sh@60 -- # waitforlisten 83798 /var/tmp/bperf.sock
06:56:27 -- common/autotest_common.sh@829 -- # '[' -z 83798 ']'
06:56:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
06:56:27 -- common/autotest_common.sh@834 -- # local max_retries=100
06:56:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
06:56:27 -- common/autotest_common.sh@838 -- # xtrace_disable
06:56:27 -- common/autotest_common.sh@10 -- # set +x
00:17:23.446 [2024-12-13 06:56:27.777498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:23.446 [2024-12-13 06:56:27.778139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83798 ]
00:17:23.446 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:23.446 Zero copy mechanism will not be used.
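This is the standard bdevperf bring-up used by these digest tests: the binary starts with -z so it sits idle waiting to be configured over JSON-RPC, -m 2 pins it to core 1, -r points it at a private UNIX-domain RPC socket, and -w randread -o 131072 -q 16 -t 2 describe the workload it will run once told to; waitforlisten then blocks until the socket answers. A minimal sketch of the same launch sequence, with a polling loop standing in for the harness's waitforlisten helper (rpc_get_methods is just a cheap query used here to probe the socket):

    # launch bdevperf idle (-z): configure over RPC first, run the workload later
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll until the JSON-RPC socket accepts requests
    until scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done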
00:17:23.446 [2024-12-13 06:56:27.912615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:23.446 [2024-12-13 06:56:27.945413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:23.705 06:56:28 -- common/autotest_common.sh@858 -- # (( i == 0 ))
06:56:28 -- common/autotest_common.sh@862 -- # return 0
06:56:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:56:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:23.964 06:56:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
06:56:28 -- common/autotest_common.sh@561 -- # xtrace_disable
06:56:28 -- common/autotest_common.sh@10 -- # set +x
00:17:23.964 06:56:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:56:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
06:56:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:24.223 nvme0n1
00:17:24.223 06:56:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
06:56:28 -- common/autotest_common.sh@561 -- # xtrace_disable
06:56:28 -- common/autotest_common.sh@10 -- # set +x
06:56:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:56:28 -- host/digest.sh@69 -- # bperf_py perform_tests
06:56:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:24.484 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:24.484 Zero copy mechanism will not be used.
00:17:24.484 Running I/O for 2 seconds...
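The RPC sequence just traced is the heart of the test: error statistics and bdev-level retries are enabled (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), any stale crc32c error injection in the accel framework is cleared, the controller is attached over TCP with data digest enabled (--ddgst) so every received payload gets a crc32c check, and injection is then flipped to corrupt mode (-i 32 appears to corrupt every 32nd crc32c operation) before perform_tests starts the timed run. With digests deliberately corrupted, each affected READ fails its digest check and completes with a transient transport error, which is exactly the flood of records that follows. Condensed, the sequence is (all commands as traced above, against the same bperf socket):

    RPC='scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # start with injection off
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # prints the new bdev: nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # now corrupt computed digests
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests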
00:17:24.484 [2024-12-13 06:56:28.760233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680)
00:17:24.484 [2024-12-13 06:56:28.760298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:24.484 [2024-12-13 06:56:28.760330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:24.751 [... similar records omitted, 06:56:28.764 through 06:56:29.161: every ~4 ms another 32-block READ on tqpair=(0xd20680) hits an injected data digest error and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15, lba varying, sqhd cycling 0001/0021/0041/0061, dnr:0 ...]
00:17:24.751 [2024-12-13 06:56:29.165322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680)
00:17:24.751 [2024-12-13 06:56:29.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1
lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.165395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.169270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.169478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.169511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.173552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.173588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.173616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.177549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.177586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.177614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.181538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.181573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.181602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.185510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.185544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.185573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.189517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.189552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.193447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.193481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.193509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.751 [2024-12-13 06:56:29.197485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.751 [2024-12-13 06:56:29.197535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.751 [2024-12-13 06:56:29.197565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.201568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.201603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.201631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.205645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.205680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.205708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.210021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.210058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.210086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.214533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.214570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.214598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.219010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.219048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.219078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.223551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 
00:17:24.752 [2024-12-13 06:56:29.223591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.223620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.228339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.228413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.228429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.232831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.233024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.233058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.237524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.237564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.237595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.241942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.241979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.242007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.246283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.246319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.246347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.250633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.250672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.250701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.254708] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.254744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.254787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.258720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.258770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.258798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.752 [2024-12-13 06:56:29.263019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:24.752 [2024-12-13 06:56:29.263056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.752 [2024-12-13 06:56:29.263084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.025 [2024-12-13 06:56:29.267744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.025 [2024-12-13 06:56:29.267785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.025 [2024-12-13 06:56:29.267799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.025 [2024-12-13 06:56:29.272284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.025 [2024-12-13 06:56:29.272463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.025 [2024-12-13 06:56:29.272481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.025 [2024-12-13 06:56:29.277090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.025 [2024-12-13 06:56:29.277130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.025 [2024-12-13 06:56:29.277160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.025 [2024-12-13 06:56:29.281857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.025 [2024-12-13 06:56:29.281912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.025 [2024-12-13 06:56:29.281942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:25.025 [2024-12-13 06:56:29.286151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.025 [2024-12-13 06:56:29.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.286217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.290336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.290399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.290427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.294536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.294573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.294602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.298637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.298673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.298702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.303400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.303471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.303487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.308234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.308476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.308494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.313473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.313542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.313574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.318296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.318375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.318391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.323359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.323426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.323458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.328171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.328411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.328432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.332759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.332797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.332827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.336945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.336982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.337012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.341504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.341756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.346013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.346051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.346082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.350197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.350233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.350261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.354485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.354521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.354549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.358658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.358697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.358742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.362977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.363014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.363043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.366991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.367027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.367056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.371015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.371051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.371080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.375039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.375075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:25.026 [2024-12-13 06:56:29.375104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.379060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.379096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.379125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.383082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.383117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.383161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.387110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.387159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.387188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.391255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.391291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.391319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.395275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.395311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.026 [2024-12-13 06:56:29.395339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.026 [2024-12-13 06:56:29.399313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.026 [2024-12-13 06:56:29.399378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.399407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.403350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.403413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.403443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.407470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.407505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.411455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.411491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.411520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.415454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.415489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.415517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.419390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.419614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.419648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.423737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.423802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.427914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.427954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.427969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.432044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.432083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.432113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.436191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.436233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.436247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.440368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.440448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.440463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.444506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.444541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.444570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.448510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.448545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.448573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.452528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.452564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.452577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.456524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.456558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.456586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.460477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 
00:17:25.027 [2024-12-13 06:56:29.460511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.460539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.464502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.464536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.464565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.468516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.468550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.468578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.472574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.472608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.472636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.476572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.476607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.476635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.480528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.480563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.480591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.485073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.485108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.485136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.490437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.490470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.490498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.494647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.494682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.494710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.498571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.498606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.498634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.502488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.502522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.502551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.027 [2024-12-13 06:56:29.506467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.027 [2024-12-13 06:56:29.506501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.027 [2024-12-13 06:56:29.506530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.510817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.510856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.510886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.514978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.515016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.515045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.519446] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.519482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.519511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.524005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.524045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.524059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.528511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.528545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.528573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.532835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.532873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.532902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.028 [2024-12-13 06:56:29.537313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.028 [2024-12-13 06:56:29.537393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.028 [2024-12-13 06:56:29.537428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.542051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.542253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.546973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.547015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.547045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:25.289 [2024-12-13 06:56:29.551652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.551691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.551721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.556540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.556582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.556596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.561062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.561099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.561128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.565538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.565578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.565608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.569954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.569990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.570019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.574182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.574218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.574247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.578492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.578529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.578558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.582634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.582671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.582700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.586902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.586939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.586969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.591028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.591065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.591094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.595229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.595266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.595294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.599466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.599502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.599531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.603453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.603489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.603518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.607486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.607522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.607550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.611913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.611952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.611967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.616710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.616763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.616792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.621201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.621238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.621266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.625547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.625584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.625612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.629616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.629652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.629682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.633736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.633772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-12-13 06:56:29.633801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.638009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.638046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:25.289 [2024-12-13 06:56:29.638075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.289 [2024-12-13 06:56:29.642198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.289 [2024-12-13 06:56:29.642235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.642264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.646416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.646467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.646496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.650546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.650610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.654691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.654758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.654788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.658835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.658873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.658902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.663184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.663222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.663251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.667251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.667287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.667316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.671305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.671343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.671403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.675429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.675464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.675492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.679524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.679558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.679587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.683534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.683570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.683598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.687700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.687766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.691814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.691849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.691903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.695927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.695966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.695980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.700270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.700321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.700349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.704489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.704541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.704570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.708564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.708616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.708645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.712574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.712623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.712652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.716521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.716570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.716598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.720527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.720579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.720607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.724706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 
00:17:25.290 [2024-12-13 06:56:29.724758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.724786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.728960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.729012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.729040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.733151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.733201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.733231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.737222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.737273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.737301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.741502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.741552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.741581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.745642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.745692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.745720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.749691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.290 [2024-12-13 06:56:29.749742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.290 [2024-12-13 06:56:29.749770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.290 [2024-12-13 06:56:29.753795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.753844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.753872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.757826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.757876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.757904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.761941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.761991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.762020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.766049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.766100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.766128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.770268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.770330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.770361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.774621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.774671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.774699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.778844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.778896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.778925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.783084] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.783135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.783163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.787411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.787471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.787499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.791445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.791495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.791523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.795368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.795417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.795445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.799359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.799408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.799436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.291 [2024-12-13 06:56:29.803818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.291 [2024-12-13 06:56:29.803882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.291 [2024-12-13 06:56:29.803897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.808415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.808475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.808504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:25.552 [2024-12-13 06:56:29.812579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.812631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.816662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.816711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.816738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.820609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.820657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.820685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.824720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.824771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.824799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.828737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.828787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.828815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.832754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.552 [2024-12-13 06:56:29.832803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.552 [2024-12-13 06:56:29.832830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.552 [2024-12-13 06:56:29.836845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.836895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.836923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.840909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.840959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.840987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.844917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.844968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.844997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.848964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.849014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.849042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.853067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.853134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.853161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.857044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.857094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.857123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.861138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.861189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.861216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.865141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.865191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.865220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.869187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.869237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.869265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.873748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.873799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.873827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.878301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.878376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.878391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.882302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.882376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.882390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.886443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.886493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.886520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.890413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.890462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.890490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.894480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.894528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:25.553 [2024-12-13 06:56:29.894556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.898529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.898581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.902648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.902699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.902727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.906674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.906724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.906753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.910794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.910846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.910875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.915072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.915139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.915167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.919656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.919708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.919753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.924294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.924341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.924380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.928934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.928976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.928990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.933582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.933634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.553 [2024-12-13 06:56:29.933663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.553 [2024-12-13 06:56:29.938038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.553 [2024-12-13 06:56:29.938107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.938136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.942559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.942611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.942639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.947070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.947126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.947170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.951306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.951382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.951397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.955424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.955474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.955502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.959474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.959525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.959553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.963725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.963776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.963805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.967787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.967838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.967890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.971934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.971973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.971987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.977438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.977471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.977499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.982255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.982307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.982336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.986559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 
00:17:25.554 [2024-12-13 06:56:29.986625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.986654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.990729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.990780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.990808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.994827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.994877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:29.998954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:29.999008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:29.999022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.003415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.003458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.003472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.007573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.007626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.007656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.011887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.011928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.011941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.016341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.016414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.016444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.020726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.020765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.020779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.024993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.025048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.025092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.029497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.029571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.029585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.033860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.033913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.033942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.038031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.038083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.038111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.042395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.042445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.042472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.046514] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.046565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.046594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.050525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.554 [2024-12-13 06:56:30.050576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.554 [2024-12-13 06:56:30.050604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.554 [2024-12-13 06:56:30.054797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.555 [2024-12-13 06:56:30.054849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.555 [2024-12-13 06:56:30.054877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.555 [2024-12-13 06:56:30.058944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.555 [2024-12-13 06:56:30.058995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.555 [2024-12-13 06:56:30.059024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.555 [2024-12-13 06:56:30.063067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.555 [2024-12-13 06:56:30.063119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.555 [2024-12-13 06:56:30.063147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.555 [2024-12-13 06:56:30.067554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.555 [2024-12-13 06:56:30.067607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.555 [2024-12-13 06:56:30.067636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.816 [2024-12-13 06:56:30.072019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:25.816 [2024-12-13 06:56:30.072059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.816 [2024-12-13 06:56:30.072074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:25.816 [2024-12-13 06:56:30.076576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680)
00:17:25.816 [2024-12-13 06:56:30.076613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:25.816 [2024-12-13 06:56:30.076641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:26.343 [2024-12-13 06:56:30.652769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680)
00:17:26.343 [2024-12-13 06:56:30.652821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:26.343 [2024-12-13 06:56:30.652850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.657317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.657363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.657393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.661532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.661569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.661598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.666257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.666309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.666337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.670549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.670599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.670628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.674796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.674848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.674878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.679110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.679160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.679188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.683373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.683433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:26.343 [2024-12-13 06:56:30.683462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.687682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.687749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.687778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.691964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.692003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.692033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.696279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.696329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.696358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.700387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.700450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.704652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.704706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.709209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.709261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.709291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.713871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.713923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.713952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.718403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.718444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.718474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.722870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.722921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.722950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.727331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.727407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.727421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.343 [2024-12-13 06:56:30.731594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.343 [2024-12-13 06:56:30.731631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.343 [2024-12-13 06:56:30.731660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.344 [2024-12-13 06:56:30.736012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.344 [2024-12-13 06:56:30.736051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.344 [2024-12-13 06:56:30.736065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.344 [2024-12-13 06:56:30.740436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.344 [2024-12-13 06:56:30.740487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.344 [2024-12-13 06:56:30.740516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.344 [2024-12-13 06:56:30.744649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.344 [2024-12-13 06:56:30.744715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.344 [2024-12-13 06:56:30.744744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.344 [2024-12-13 06:56:30.748934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.344 [2024-12-13 06:56:30.748986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.344 [2024-12-13 06:56:30.749016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:26.344 [2024-12-13 06:56:30.753069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20680) 00:17:26.344 [2024-12-13 06:56:30.753120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.344 [2024-12-13 06:56:30.753148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:26.344 00:17:26.344 Latency(us) 00:17:26.344 [2024-12-13T06:56:30.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.344 [2024-12-13T06:56:30.863Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:26.344 nvme0n1 : 2.00 7293.36 911.67 0.00 0.00 2190.39 1742.66 6911.07 00:17:26.344 [2024-12-13T06:56:30.863Z] =================================================================================================================== 00:17:26.344 [2024-12-13T06:56:30.863Z] Total : 7293.36 911.67 0.00 0.00 2190.39 1742.66 6911.07 00:17:26.344 0 00:17:26.344 06:56:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:26.344 06:56:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:26.344 06:56:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:26.344 | .driver_specific 00:17:26.344 | .nvme_error 00:17:26.344 | .status_code 00:17:26.344 | .command_transient_transport_error' 00:17:26.344 06:56:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:26.603 06:56:31 -- host/digest.sh@71 -- # (( 471 > 0 )) 00:17:26.603 06:56:31 -- host/digest.sh@73 -- # killprocess 83798 00:17:26.603 06:56:31 -- common/autotest_common.sh@936 -- # '[' -z 83798 ']' 00:17:26.603 06:56:31 -- common/autotest_common.sh@940 -- # kill -0 83798 00:17:26.603 06:56:31 -- common/autotest_common.sh@941 -- # uname 00:17:26.603 06:56:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.603 06:56:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83798 00:17:26.603 killing process with pid 83798 00:17:26.603 Received shutdown signal, test time was about 2.000000 seconds 00:17:26.603 00:17:26.603 Latency(us) 00:17:26.603 [2024-12-13T06:56:31.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.603 [2024-12-13T06:56:31.122Z] =================================================================================================================== 00:17:26.603 [2024-12-13T06:56:31.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.603 06:56:31 -- common/autotest_common.sh@942 -- # 
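The check traced above is the heart of this test leg: after bdevperf ran with crc32c corruption injected, the harness asks the bdev layer how many commands completed with a transient transport error and asserts the count is positive (471 here). A minimal stand-alone sketch of that check, not the harness's literal code but assembled from exactly the rpc.py call and jq filter visible in the trace; the threshold test mirrors digest.sh@71's (( errcount > 0 )):

    # Count NVMe commands that completed with COMMAND TRANSIENT TRANSPORT ERROR on
    # the bperf instance; requires bdev_nvme_set_options --nvme-error-stat (set below).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test passes only if digest errors actually occurred and were surfaced.
    (( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }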
00:17:26.603 06:56:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:26.603 06:56:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:26.603 06:56:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83798'
06:56:31 -- common/autotest_common.sh@955 -- # kill 83798
00:17:26.603 06:56:31 -- common/autotest_common.sh@960 -- # wait 83798
00:17:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:26.862 06:56:31 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
06:56:31 -- host/digest.sh@54 -- # local rw bs qd
06:56:31 -- host/digest.sh@56 -- # rw=randwrite
06:56:31 -- host/digest.sh@56 -- # bs=4096
06:56:31 -- host/digest.sh@56 -- # qd=128
06:56:31 -- host/digest.sh@58 -- # bperfpid=83846
06:56:31 -- host/digest.sh@60 -- # waitforlisten 83846 /var/tmp/bperf.sock
06:56:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
06:56:31 -- common/autotest_common.sh@829 -- # '[' -z 83846 ']'
06:56:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
06:56:31 -- common/autotest_common.sh@834 -- # local max_retries=100
06:56:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
06:56:31 -- common/autotest_common.sh@838 -- # xtrace_disable
06:56:31 -- common/autotest_common.sh@10 -- # set +x
[2024-12-13 06:56:31.271890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-12-13 06:56:31.272443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83846 ]
[2024-12-13 06:56:31.411937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-13 06:56:31.445267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:28.054 06:56:32 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:28.054 06:56:32 -- common/autotest_common.sh@862 -- # return 0
00:17:28.054 06:56:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:28.054 06:56:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:28.054 06:56:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:28.055 06:56:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.055 06:56:32 -- common/autotest_common.sh@10 -- # set +x
00:17:28.055 06:56:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.055 06:56:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:28.055 06:56:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:28.313 nvme0n1
00:17:28.313 06:56:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
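Taken together, the RPCs traced above are the whole recipe for this randwrite leg: collect per-status-code NVMe error counts with bdev-level retries effectively unlimited (-1), attach the NVMe-oF/TCP target with data digest enabled (--ddgst), then arm the accel layer to corrupt crc32c results so the receive path sees mismatched digests. A condensed sketch of that sequence against the bperf socket, using only commands and arguments visible in the trace (the rpc/sock variables are the editor's shorthand, and the meaning of -i 256 is not asserted beyond what the log shows):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Track per-status-code NVMe error counts; -1 retry count retries transport errors indefinitely.
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # No crc32c error injection while the controller attaches (ordering as traced).
    $rpc -s $sock accel_error_inject_error -o crc32c -t disable
    # Attach the TCP target with data digest (DDGST) enabled; prints the new bdev name, nvme0n1.
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm injection: corrupt crc32c results so PDUs fail digest validation (-i 256 as logged).
    $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 256

With injection armed, the harness kicks off the 2-second bdevperf run via perform_tests, which produces the digest-error stream that follows.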
00:17:28.313 06:56:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.313 06:56:32 -- common/autotest_common.sh@10 -- # set +x
00:17:28.313 06:56:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.313 06:56:32 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:28.313 06:56:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:28.572 Running I/O for 2 seconds...
00:17:28.572 [2024-12-13 06:56:32.947001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190ddc00
[2024-12-13 06:56:32.948470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-13 06:56:32.948531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of entries of the same pattern omitted (06:56:32.962 through 06:56:34.072, roughly one every 14-16 ms): tcp.c:2036:data_crc32_calc_done reports *ERROR*: Data digest error on tqpair=(0x2165d30) with a varying pdu address, followed by a WRITE (sqid:1, successive cids, nsid:1, len:1, varying lba) that completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd counting down from 0001 ...]
00:17:29.613 [2024-12-13 06:56:34.087895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190ebb98
[2024-12-13 06:56:34.088568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7422 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:29.613 [2024-12-13 06:56:34.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.613 [2024-12-13 06:56:34.103173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190eb760 00:17:29.613 [2024-12-13 06:56:34.103833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.613 [2024-12-13 06:56:34.103877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.613 [2024-12-13 06:56:34.118393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190eb328 00:17:29.613 [2024-12-13 06:56:34.119064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.613 [2024-12-13 06:56:34.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.134053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190eaef0 00:17:29.872 [2024-12-13 06:56:34.134737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.134796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.148391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190eaab8 00:17:29.872 [2024-12-13 06:56:34.149031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.149076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.163591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190ea680 00:17:29.872 [2024-12-13 06:56:34.164226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.164279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.179500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190ea248 00:17:29.872 [2024-12-13 06:56:34.180125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.180168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.194350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e9e10 00:17:29.872 [2024-12-13 06:56:34.195021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:19875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.195070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.209385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e99d8 00:17:29.872 [2024-12-13 06:56:34.209953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.223931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e95a0 00:17:29.872 [2024-12-13 06:56:34.224554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.224594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.238568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e9168 00:17:29.872 [2024-12-13 06:56:34.239161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.239202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.253267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e8d30 00:17:29.872 [2024-12-13 06:56:34.253828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.253872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.268107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e88f8 00:17:29.872 [2024-12-13 06:56:34.268684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.872 [2024-12-13 06:56:34.268726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.872 [2024-12-13 06:56:34.283495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e84c0 00:17:29.873 [2024-12-13 06:56:34.284023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.284063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.298537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e8088 00:17:29.873 [2024-12-13 06:56:34.299046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:18400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.299085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.313545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e7c50 00:17:29.873 [2024-12-13 06:56:34.314077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.314130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.330129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e7818 00:17:29.873 [2024-12-13 06:56:34.330656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.330695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.347645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e73e0 00:17:29.873 [2024-12-13 06:56:34.348108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.348160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.364155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e6fa8 00:17:29.873 [2024-12-13 06:56:34.364679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.364718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.873 [2024-12-13 06:56:34.382396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e6b70 00:17:29.873 [2024-12-13 06:56:34.383048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.873 [2024-12-13 06:56:34.383144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.399746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e6738 00:17:30.133 [2024-12-13 06:56:34.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.400219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.414711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e6300 00:17:30.133 [2024-12-13 06:56:34.415139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.415177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.429068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e5ec8 00:17:30.133 [2024-12-13 06:56:34.429507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.429545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.443527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e5a90 00:17:30.133 [2024-12-13 06:56:34.443956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.443994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.457879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e5658 00:17:30.133 [2024-12-13 06:56:34.458285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.458322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.472676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e5220 00:17:30.133 [2024-12-13 06:56:34.473102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.473138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.486626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e4de8 00:17:30.133 [2024-12-13 06:56:34.486986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.487018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.501318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e49b0 00:17:30.133 [2024-12-13 06:56:34.501713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.501751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.516580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e4578 00:17:30.133 [2024-12-13 
06:56:34.516925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.516962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.531389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e4140 00:17:30.133 [2024-12-13 06:56:34.531730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.531773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.546537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e3d08 00:17:30.133 [2024-12-13 06:56:34.546865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.546907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.561354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e38d0 00:17:30.133 [2024-12-13 06:56:34.561685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.561713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.576148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e3498 00:17:30.133 [2024-12-13 06:56:34.576492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.576543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.590869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e3060 00:17:30.133 [2024-12-13 06:56:34.591158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.591181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.606312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e2c28 00:17:30.133 [2024-12-13 06:56:34.606641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.606668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.622772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e27f0 
00:17:30.133 [2024-12-13 06:56:34.623085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:30.133 [2024-12-13 06:56:34.639297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e23b8 00:17:30.133 [2024-12-13 06:56:34.639580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.133 [2024-12-13 06:56:34.639606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.655907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e1f80 00:17:30.393 [2024-12-13 06:56:34.656170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.656226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.670794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e1b48 00:17:30.393 [2024-12-13 06:56:34.671031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.671057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.684885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e1710 00:17:30.393 [2024-12-13 06:56:34.685088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.685108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.698765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e12d8 00:17:30.393 [2024-12-13 06:56:34.698938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.698958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.712602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e0ea0 00:17:30.393 [2024-12-13 06:56:34.712804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.712826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.726322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) 
with pdu=0x2000190e0a68 00:17:30.393 [2024-12-13 06:56:34.726504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.726524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.740069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e0630 00:17:30.393 [2024-12-13 06:56:34.740234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.740254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.753984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190e01f8 00:17:30.393 [2024-12-13 06:56:34.754141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.754160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.767825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190dfdc0 00:17:30.393 [2024-12-13 06:56:34.768018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.768039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.781688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190df988 00:17:30.393 [2024-12-13 06:56:34.781829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.781848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.796333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190df550 00:17:30.393 [2024-12-13 06:56:34.796526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.796547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.810606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190df118 00:17:30.393 [2024-12-13 06:56:34.810746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.810767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.824601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2165d30) with pdu=0x2000190dece0 00:17:30.393 [2024-12-13 06:56:34.824715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.824735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.838386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190de8a8 00:17:30.393 [2024-12-13 06:56:34.838490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.838510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.852322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190de038 00:17:30.393 [2024-12-13 06:56:34.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.852456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.873208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190de038 00:17:30.393 [2024-12-13 06:56:34.875221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.875270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.888195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190de470 00:17:30.393 [2024-12-13 06:56:34.889543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.889590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.393 [2024-12-13 06:56:34.902201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190de8a8 00:17:30.393 [2024-12-13 06:56:34.903561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.393 [2024-12-13 06:56:34.903610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:30.653 [2024-12-13 06:56:34.917579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2165d30) with pdu=0x2000190dece0 00:17:30.653 [2024-12-13 06:56:34.918915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:30.653 [2024-12-13 06:56:34.918964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:30.653 00:17:30.653 Latency(us) 00:17:30.653 [2024-12-13T06:56:35.172Z] Device 
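An aside for readers working from a saved console log rather than the live RPC socket: the two message types in the block condensed above pair up one-to-one, so either grep gives the injected-error tally that the harness asserts on below (131 in this run). A minimal reader-side sketch, not part of the harness; autotest.log is a hypothetical capture path:

    # Tally injected digest failures from a captured log (bash + grep only).
    # Each CRC32C corruption logs one data-digest error and exactly one
    # COMMAND TRANSIENT TRANSPORT ERROR (SCT 0x0 / SC 0x22) completion.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' autotest.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' autotest.log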
00:17:30.653 Latency(us)
00:17:30.653 [2024-12-13T06:56:35.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:30.653 [2024-12-13T06:56:35.172Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:30.653 nvme0n1 : 2.00 16816.61 65.69 0.00 0.00 7605.49 6285.50 21924.77
00:17:30.653 [2024-12-13T06:56:35.172Z] ===================================================================================================================
00:17:30.653 [2024-12-13T06:56:35.172Z] Total : 16816.61 65.69 0.00 0.00 7605.49 6285.50 21924.77
00:17:30.653 0
00:17:30.653 06:56:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:30.653 06:56:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:30.653 06:56:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:30.653 | .driver_specific
00:17:30.653 | .nvme_error
00:17:30.653 | .status_code
00:17:30.653 | .command_transient_transport_error'
00:17:30.653 06:56:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:30.912 06:56:35 -- host/digest.sh@71 -- # (( 131 > 0 ))
00:17:30.912 06:56:35 -- host/digest.sh@73 -- # killprocess 83846
00:17:30.912 06:56:35 -- common/autotest_common.sh@936 -- # '[' -z 83846 ']'
00:17:30.912 06:56:35 -- common/autotest_common.sh@940 -- # kill -0 83846
00:17:30.912 06:56:35 -- common/autotest_common.sh@941 -- # uname
00:17:30.912 06:56:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:30.912 06:56:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83846
00:17:30.912 06:56:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:30.912 06:56:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:30.912 killing process with pid 83846
00:17:30.912 06:56:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83846'
00:17:30.912 Received shutdown signal, test time was about 2.000000 seconds
00:17:30.912
00:17:30.912 Latency(us)
00:17:30.912 [2024-12-13T06:56:35.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:30.912 [2024-12-13T06:56:35.431Z] ===================================================================================================================
00:17:30.912 [2024-12-13T06:56:35.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:30.912 06:56:35 -- common/autotest_common.sh@955 -- # kill 83846
00:17:30.912 06:56:35 -- common/autotest_common.sh@960 -- # wait 83846
00:17:30.912 06:56:35 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:17:30.912 06:56:35 -- host/digest.sh@54 -- # local rw bs qd
00:17:30.912 06:56:35 -- host/digest.sh@56 -- # rw=randwrite
00:17:30.912 06:56:35 -- host/digest.sh@56 -- # bs=131072
00:17:30.912 06:56:35 -- host/digest.sh@56 -- # qd=16
00:17:30.912 06:56:35 -- host/digest.sh@58 -- # bperfpid=83911
00:17:30.912 06:56:35 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:30.912 06:56:35 -- host/digest.sh@60 -- # waitforlisten 83911 /var/tmp/bperf.sock
00:17:30.912 06:56:35 -- common/autotest_common.sh@829 -- # '[' -z 83911 ']'
00:17:30.912 06:56:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:30.912 06:56:35 -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
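The get_transient_errcount step traced just above is a single iostat RPC piped through jq; a condensed sketch of the same query (the SPDK_DIR default mirrors this log's checkout path and is an assumption; the counter is only populated because bdev_nvme_set_options --nvme-error-stat was passed at setup):

    # Pull the per-bdev NVMe error counters from the bdevperf app on its
    # private RPC socket and read the transient-transport-error count.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    count=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes when at least one injected digest error was observed:
    (( count > 0 )) && echo "observed $count transient transport errors"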
00:17:30.912 06:56:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:30.912 06:56:35 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:30.912 06:56:35 -- common/autotest_common.sh@10 -- # set +x
00:17:30.912 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:30.912 Zero copy mechanism will not be used.
00:17:30.912 [2024-12-13 06:56:35.403716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:30.912 [2024-12-13 06:56:35.403804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83911 ]
00:17:31.171 [2024-12-13 06:56:35.538925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.171 [2024-12-13 06:56:35.573257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:31.171 06:56:35 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:31.171 06:56:35 -- common/autotest_common.sh@862 -- # return 0
00:17:31.171 06:56:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:31.171 06:56:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:31.430 06:56:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:31.431 06:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:31.431 06:56:35 -- common/autotest_common.sh@10 -- # set +x
00:17:31.431 06:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:31.431 06:56:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:31.431 06:56:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:32.000 nvme0n1
00:17:32.000 06:56:36 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:32.000 06:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.000 06:56:36 -- common/autotest_common.sh@10 -- # set +x
00:17:32.000 06:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.000 06:56:36 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:32.000 06:56:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:32.000 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:32.000 Zero copy mechanism will not be used.
00:17:32.000 Running I/O for 2 seconds...
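Unrolled from the xtrace above, the whole second error-injection pass is a bdevperf launch plus five RPCs; a minimal sketch under the same paths and flags as this log (the sleep is a stand-in for the harness's waitforlisten polling loop, and error handling is omitted):

    # Launch bdevperf idle on a private RPC socket, enable NVMe error stats,
    # attach the TCP target with data digest (--ddgst) on, arm the CRC32C
    # corruption injector, then kick off the timed run.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &       # -z: idle until perform_tests
    sleep 1                                          # harness: waitforlisten $! /var/tmp/bperf.sock

    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # start from a clean injector
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # prints the new bdev: nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt every 32nd crc32c op

    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests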
00:17:32.000 … 00:17:32.263 [condensed for readability: the 2-second 131072-byte randwrite pass logs the same three-line pattern against the new qpair — tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90, then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 (len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22), one injection roughly every 5 ms with sqhd cycling 0001/0021/0041/0061; the run continues past the end of this excerpt]
[2024-12-13 06:56:36.646892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.646920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.651805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.652183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.652211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.657112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.657446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.657485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.662246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.662556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.662584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.667213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.667521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.667548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.672143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.672481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.677068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.677343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.677379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.681692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.681970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.681996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.686291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.686584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.686612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.690850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.691145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.691171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.695493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.695771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.695797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.700128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.700454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.700496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.704888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.705172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.263 [2024-12-13 06:56:36.705199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.263 [2024-12-13 06:56:36.709609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.263 [2024-12-13 06:56:36.709891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.709918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.714233] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.714548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.714576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.718879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.719199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.723486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.723764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.728071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.728399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.728435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.732792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.733070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.733095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.737413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.737691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.737717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.741978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.742255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.742281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:32.264 [2024-12-13 06:56:36.746633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.746930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.746956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.751219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.751527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.751553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.755828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.756167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.756210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.760643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.760922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.760948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.765274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.765643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.765672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.770563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.770935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.264 [2024-12-13 06:56:36.775783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.264 [2024-12-13 06:56:36.776147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.264 [2024-12-13 06:56:36.776177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.524 [2024-12-13 06:56:36.781058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.524 [2024-12-13 06:56:36.781319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.524 [2024-12-13 06:56:36.781345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.524 [2024-12-13 06:56:36.786126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.786489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.786527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.790939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.791271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.791298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.795685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.796001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.796030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.800373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.800685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.804995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.805271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.805297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.809957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.810260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.810287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.815153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.815516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.815545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.820325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.820670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.820716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.825751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.826147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.826174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.833249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.833633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.833662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.838304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.838614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.838641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.843368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.843663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.843722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.848289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.848633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.848661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.853575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.853887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.853914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.858610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.858939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.858967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.863820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.864166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.864224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.868861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.869173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.869216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.873984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.874320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.874372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.878863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.879147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.879175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.883764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.884108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 
[2024-12-13 06:56:36.884137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.888700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.889006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.889033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.893542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.893865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.893893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.898506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.898801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.898829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.525 [2024-12-13 06:56:36.903247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.525 [2024-12-13 06:56:36.903564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.525 [2024-12-13 06:56:36.903592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.908089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.908445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.908471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.912952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.913254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.918027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.918309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.918335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.922840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.923123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.923149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.927774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.928117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.928146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.932641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.932967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.932995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.937756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.938041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.938068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.942597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.942894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.942921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.947479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.947759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.947785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.952287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.952595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.952622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.956984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.957276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.961823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.962132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.966592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.966874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.966900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.971286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.971602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.971628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.976037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.976381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.976417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.980787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.981092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.981118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.985588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.985874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.985901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.990713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.991001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.991028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:36.995447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:36.995739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:36.995765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.000123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:37.000467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.000496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.004913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:37.005209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.005236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.009612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:37.009893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.009920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.014300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:37.014592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.014619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.018957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 
[2024-12-13 06:56:37.019236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.019263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.023801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.526 [2024-12-13 06:56:37.024152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.526 [2024-12-13 06:56:37.024180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.526 [2024-12-13 06:56:37.029393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.527 [2024-12-13 06:56:37.029745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.527 [2024-12-13 06:56:37.029773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.527 [2024-12-13 06:56:37.034484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.527 [2024-12-13 06:56:37.034767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.527 [2024-12-13 06:56:37.034793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.527 [2024-12-13 06:56:37.039376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.527 [2024-12-13 06:56:37.039659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.527 [2024-12-13 06:56:37.039684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.044538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.044854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.044881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.049646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.049922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.049948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.054423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.054698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.054725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.059359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.059657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.059699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.064535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.064854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.064885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.069520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.069845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.074334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.074642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.074669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.079035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.079326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.079360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.083673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.083987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.787 [2024-12-13 06:56:37.084016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.787 [2024-12-13 06:56:37.088449] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.787 [2024-12-13 06:56:37.088729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.088755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.093201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.093516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.093545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.097911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.098209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.098235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.102533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.102825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.102852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.107217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.107526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.107554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.111831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.112155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.112183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.116554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.116831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.116857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:32.788 [2024-12-13 06:56:37.121181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.121493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.125912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.126227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.126253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.130575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.130874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.135081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.135405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.135443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.139672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.140002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.140030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.144425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.144722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.144748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.149021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.149295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.149321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.153761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.154075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.158427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.158698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.158725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.162997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.163306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.163327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.167666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.168017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.172349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.172648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.172674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.176874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.177149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.177174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.181464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.181760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.181787] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.186137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.186442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.186468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.190914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.191188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.191215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.195565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.195844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.195895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.200114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.200467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.200494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.204788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.205064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.205091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.788 [2024-12-13 06:56:37.209382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.788 [2024-12-13 06:56:37.209659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.788 [2024-12-13 06:56:37.209703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.214017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.214313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.214339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.218686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.218961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.218988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.223266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.223556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.223583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.227795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.228129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.228157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.232464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.232765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.237019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.237294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.237320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.241724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.242008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.242035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.246345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.246631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:32.789 [2024-12-13 06:56:37.246657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.250904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.251181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.251207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.255560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.255837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.255887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.260123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.260486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.260524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.264915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.265210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.265237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.269657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.269932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.269959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.274227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.274536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.274563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.279521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.279898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.279928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.284751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.285086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.285127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.290190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.290541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.290568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.295219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.295527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.295553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:32.789 [2024-12-13 06:56:37.299841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:32.789 [2024-12-13 06:56:37.300194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.789 [2024-12-13 06:56:37.300250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.304938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.305275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.305302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.309860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.310149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.310190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.314995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.315297] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.315325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.321856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.322156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.322183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.327067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.327348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.327396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.331742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.332099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.332129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.336674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.336992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.337019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.341543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.341830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.341857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.346228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.346537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.351017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.351294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.351320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.355707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.356028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.356057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.360320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.360657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.360700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.365079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.365373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.365408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.369770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.370046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.370072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.374894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.375188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.375216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.379912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.380257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.380294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.385319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 
00:17:33.050 [2024-12-13 06:56:37.385694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.385725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.390842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.391105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.391131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.396038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.396427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.396476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.401299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.401694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.406556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.406901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.406944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.050 [2024-12-13 06:56:37.411655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.050 [2024-12-13 06:56:37.412031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.050 [2024-12-13 06:56:37.412060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.416734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.417022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.417049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.421528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.421823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.421849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.426223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.426536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.426563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.430961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.431238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.431264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.435620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.435938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.435965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.440341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.440643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.440669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.444995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.445284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.445311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.449802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.450083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.450110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.454688] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.454978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.455007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.459325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.459614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.459641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.464021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.464356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.464391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.468769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.469054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.469081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.473588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.473894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.473921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.478309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.478604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.478631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.482911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.483186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.483213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
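Each error triple above follows the same pattern: the TCP transport (tcp.c, data_crc32_calc_done) recomputes the CRC32C data digest over a data PDU, finds it does not match the DDGST carried on the wire, and the queue-pair layer then completes the affected WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — the expected outcome of this digest-error test. For reference, NVMe/TCP data digests are plain CRC32C (Castagnoli, reflected polynomial 0x82F63B78); the sketch below is a minimal illustration of that check, not SPDK's actual implementation, and the payload size is hypothetical:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * SPDK uses table-driven/accelerated variants; this only shows the
 * math behind the digest that data_crc32_calc_done verifies. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32 * 512];             /* hypothetical 32-block, 512 B write */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst_received = 0;           /* pretend the wire carried a bad digest */
    uint32_t ddgst_computed = crc32c(payload, sizeof(payload));

    if (ddgst_computed != ddgst_received)  /* mirrors the mismatch reported above */
        fprintf(stderr, "Data digest error: computed=0x%08x received=0x%08x\n",
                ddgst_computed, ddgst_received);
    return 0;
}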
00:17:33.051 [2024-12-13 06:56:37.487571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.487850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.487917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.492267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.492556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.492582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.496892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.497188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.497215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.501643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.501945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.501971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.506295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.506587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.506613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.511000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.511275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.511301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.515730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.516073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.516102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.520753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.521081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.521107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.526122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.526435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.526476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.531212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.531520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.531547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.536298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.051 [2024-12-13 06:56:37.536616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.051 [2024-12-13 06:56:37.536644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.051 [2024-12-13 06:56:37.541315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.541645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.541674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.052 [2024-12-13 06:56:37.546673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.546981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.547008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.052 [2024-12-13 06:56:37.552148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.552488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.552516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.052 [2024-12-13 06:56:37.556871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.557148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.557174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.052 [2024-12-13 06:56:37.561674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.561965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.561992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.052 [2024-12-13 06:56:37.566849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.052 [2024-12-13 06:56:37.567160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.052 [2024-12-13 06:56:37.567187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.313 [2024-12-13 06:56:37.571728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.313 [2024-12-13 06:56:37.572056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.313 [2024-12-13 06:56:37.572086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.313 [2024-12-13 06:56:37.576882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.313 [2024-12-13 06:56:37.577163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.313 [2024-12-13 06:56:37.577190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.313 [2024-12-13 06:56:37.581663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.313 [2024-12-13 06:56:37.581964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.313 [2024-12-13 06:56:37.581991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.313 [2024-12-13 06:56:37.586436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.313 [2024-12-13 06:56:37.586752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.586780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.591263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.591562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.596019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.596372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.596407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.601033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.601321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.601358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.606194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.606555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.606585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.611290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.611650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.616350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.616684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.616741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.621519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.621882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 
[2024-12-13 06:56:37.621909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.626668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.627011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.627038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.631710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.632085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.632114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.636842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.637128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.637155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.641637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.641925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.641952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.646637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.646948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.646975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.651453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.651761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.314 [2024-12-13 06:56:37.651800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.314 [2024-12-13 06:56:37.656297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.314 [2024-12-13 06:56:37.656610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.314 [2024-12-13 06:56:37.656638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:33.314 [2024-12-13 06:56:37.661414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90
00:17:33.314 [2024-12-13 06:56:37.661720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.314 [2024-12-13 06:56:37.661749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:33.314 [2024-12-13 06:56:37.666403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90
00:17:33.314 [2024-12-13 06:56:37.666747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.314 [2024-12-13 06:56:37.666776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern — a tcp.c:2036:data_crc32_calc_done data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90, the offending WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling through 0001/0021/0041/0061 — repeats for the remaining injected writes, timestamps 2024-12-13 06:56:37.671 through 06:56:38.362, differing only in timestamp and lba ...]
00:17:34.100 [2024-12-13 06:56:38.367039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90
00:17:34.100 [2024-12-13 06:56:38.367315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.100 [2024-12-13 06:56:38.367341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.841 [2024-12-13 06:56:38.342437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.841 [2024-12-13 06:56:38.342713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.841 [2024-12-13 06:56:38.342740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.841 [2024-12-13 06:56:38.347095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.841 [2024-12-13 06:56:38.347384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.841 [2024-12-13 06:56:38.347410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.841 [2024-12-13 06:56:38.351828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:33.841 [2024-12-13 06:56:38.352189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.841 [2024-12-13 06:56:38.352249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.100 [2024-12-13 06:56:38.357186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:34.100 [2024-12-13 06:56:38.357537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.100 [2024-12-13 06:56:38.357566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.100 [2024-12-13 06:56:38.362142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:34.100 [2024-12-13 06:56:38.362518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.100 [2024-12-13 06:56:38.362556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.100 [2024-12-13 06:56:38.367039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2164860) with pdu=0x2000190fef90 00:17:34.100 [2024-12-13 06:56:38.367315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.100 [2024-12-13 06:56:38.367341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.100 00:17:34.100 Latency(us) 00:17:34.100 [2024-12-13T06:56:38.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.100 [2024-12-13T06:56:38.619Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:34.100 nvme0n1 : 2.00 6309.93 788.74 0.00 0.00 2530.47 2025.66 7864.32 
00:17:34.100 [2024-12-13T06:56:38.619Z] =================================================================================================================== 00:17:34.100 [2024-12-13T06:56:38.619Z] Total : 6309.93 788.74 0.00 0.00 2530.47 2025.66 7864.32 00:17:34.100 0 00:17:34.100 06:56:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:34.100 06:56:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:34.100 06:56:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:34.101 06:56:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:34.101 | .driver_specific 00:17:34.101 | .nvme_error 00:17:34.101 | .status_code 00:17:34.101 | .command_transient_transport_error' 00:17:34.366 06:56:38 -- host/digest.sh@71 -- # (( 407 > 0 )) 00:17:34.366 06:56:38 -- host/digest.sh@73 -- # killprocess 83911 00:17:34.366 06:56:38 -- common/autotest_common.sh@936 -- # '[' -z 83911 ']' 00:17:34.366 06:56:38 -- common/autotest_common.sh@940 -- # kill -0 83911 00:17:34.366 06:56:38 -- common/autotest_common.sh@941 -- # uname 00:17:34.366 06:56:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.366 06:56:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83911 00:17:34.366 06:56:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:34.366 06:56:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:34.366 killing process with pid 83911 00:17:34.366 06:56:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83911' 00:17:34.366 Received shutdown signal, test time was about 2.000000 seconds 00:17:34.366 00:17:34.366 Latency(us) 00:17:34.366 [2024-12-13T06:56:38.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.366 [2024-12-13T06:56:38.885Z] =================================================================================================================== 00:17:34.366 [2024-12-13T06:56:38.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.366 06:56:38 -- common/autotest_common.sh@955 -- # kill 83911 00:17:34.366 06:56:38 -- common/autotest_common.sh@960 -- # wait 83911 00:17:34.366 06:56:38 -- host/digest.sh@115 -- # killprocess 83719 00:17:34.366 06:56:38 -- common/autotest_common.sh@936 -- # '[' -z 83719 ']' 00:17:34.366 06:56:38 -- common/autotest_common.sh@940 -- # kill -0 83719 00:17:34.366 06:56:38 -- common/autotest_common.sh@941 -- # uname 00:17:34.366 06:56:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.366 06:56:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83719 00:17:34.366 06:56:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.366 06:56:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.366 killing process with pid 83719 00:17:34.366 06:56:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83719' 00:17:34.366 06:56:38 -- common/autotest_common.sh@955 -- # kill 83719 00:17:34.366 06:56:38 -- common/autotest_common.sh@960 -- # wait 83719 00:17:34.638 00:17:34.638 real 0m15.809s 00:17:34.638 user 0m30.959s 00:17:34.638 sys 0m4.407s 00:17:34.638 06:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.638 06:56:39 -- common/autotest_common.sh@10 -- # set +x 00:17:34.638 ************************************ 00:17:34.638 END TEST nvmf_digest_error 00:17:34.638 ************************************ 00:17:34.638 06:56:39 -- host/digest.sh@138 -- # trap - 
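The pass/fail decision recorded just above, (( 407 > 0 )), hinges on a single counter: the command_transient_transport_error status count that the bdev layer accumulates while the corrupted-digest writes fail. A minimal sketch of that check, assuming only the RPC call and jq filter visible in the trace ($rootdir is a stand-in for /home/vagrant/spdk_repo/spdk; the real host/digest.sh may differ in detail):

# Sketch of the transient-error check; names other than the RPC and jq filter are assumptions.
get_transient_errcount() {
    local bdev=$1
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # this run read back 407
(( errcount > 0 ))                           # any nonzero count means the injected digest errors were detected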
00:17:34.638 06:56:39 -- host/digest.sh@139 -- # nvmftestfini
00:17:34.638 06:56:39 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:34.638 06:56:39 -- nvmf/common.sh@116 -- # sync
00:17:34.638 06:56:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:34.638 06:56:39 -- nvmf/common.sh@119 -- # set +e
00:17:34.638 06:56:39 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:34.638 06:56:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:34.638 rmmod nvme_tcp
00:17:34.638 rmmod nvme_fabrics
00:17:34.638 rmmod nvme_keyring
00:17:34.638 06:56:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:34.638 06:56:39 -- nvmf/common.sh@123 -- # set -e
00:17:34.638 06:56:39 -- nvmf/common.sh@124 -- # return 0
00:17:34.638 06:56:39 -- nvmf/common.sh@477 -- # '[' -n 83719 ']'
00:17:34.638 06:56:39 -- nvmf/common.sh@478 -- # killprocess 83719
00:17:34.638 06:56:39 -- common/autotest_common.sh@936 -- # '[' -z 83719 ']'
00:17:34.901 Process with pid 83719 is not found
06:56:39 -- common/autotest_common.sh@940 -- # kill -0 83719
00:17:34.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83719) - No such process
00:17:34.901 06:56:39 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83719 is not found'
00:17:34.901 06:56:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:34.901 06:56:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:34.901 06:56:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:34.901 06:56:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:34.901 06:56:39 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:34.901 06:56:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:34.901 06:56:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:34.901 06:56:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:34.901 06:56:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:17:34.901 ************************************
00:17:34.901 END TEST nvmf_digest
00:17:34.901 ************************************
00:17:34.901 00:17:34.901
real 0m31.848s
user 1m1.066s
sys 0m9.025s
00:17:34.901 06:56:39 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:17:34.901 06:56:39 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]]
00:17:34.901 06:56:39 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:17:34.901 06:56:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:17:34.901 06:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:34.901 06:56:39 -- common/autotest_common.sh@10 -- # set +x
00:17:34.901 ************************************
00:17:34.901 START TEST nvmf_multipath
00:17:34.901 ************************************
00:17:34.901 06:56:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:17:34.901 * Looking for test storage...
00:17:34.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:34.901 06:56:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:34.901 06:56:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:34.901 06:56:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:34.901 06:56:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:34.901 06:56:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:34.901 06:56:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.901 06:56:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.901 06:56:39 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.901 06:56:39 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.901 06:56:39 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.901 06:56:39 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.901 06:56:39 -- scripts/common.sh@337 -- # local 'op=<' 00:17:34.901 06:56:39 -- scripts/common.sh@339 -- # ver1_l=2 00:17:34.901 06:56:39 -- scripts/common.sh@340 -- # ver2_l=1 00:17:34.901 06:56:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.901 06:56:39 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.901 06:56:39 -- scripts/common.sh@344 -- # : 1 00:17:34.901 06:56:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.901 06:56:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.901 06:56:39 -- scripts/common.sh@364 -- # decimal 1 00:17:34.901 06:56:39 -- scripts/common.sh@352 -- # local d=1 00:17:34.901 06:56:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.901 06:56:39 -- scripts/common.sh@354 -- # echo 1 00:17:34.901 06:56:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.901 06:56:39 -- scripts/common.sh@365 -- # decimal 2 00:17:34.901 06:56:39 -- scripts/common.sh@352 -- # local d=2 00:17:34.901 06:56:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.901 06:56:39 -- scripts/common.sh@354 -- # echo 2 00:17:34.901 06:56:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:34.901 06:56:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.901 06:56:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.901 06:56:39 -- scripts/common.sh@367 -- # return 0 00:17:35.159 06:56:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.159 06:56:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.159 --rc genhtml_branch_coverage=1 00:17:35.159 --rc genhtml_function_coverage=1 00:17:35.159 --rc genhtml_legend=1 00:17:35.159 --rc geninfo_all_blocks=1 00:17:35.159 --rc geninfo_unexecuted_blocks=1 00:17:35.159 00:17:35.159 ' 00:17:35.159 06:56:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.159 --rc genhtml_branch_coverage=1 00:17:35.159 --rc genhtml_function_coverage=1 00:17:35.159 --rc genhtml_legend=1 00:17:35.159 --rc geninfo_all_blocks=1 00:17:35.159 --rc geninfo_unexecuted_blocks=1 00:17:35.159 00:17:35.159 ' 00:17:35.159 06:56:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.159 --rc genhtml_branch_coverage=1 00:17:35.159 --rc genhtml_function_coverage=1 00:17:35.159 --rc genhtml_legend=1 00:17:35.159 --rc geninfo_all_blocks=1 00:17:35.159 --rc geninfo_unexecuted_blocks=1 00:17:35.159 00:17:35.159 ' 00:17:35.159 
06:56:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.159 --rc genhtml_branch_coverage=1 00:17:35.159 --rc genhtml_function_coverage=1 00:17:35.159 --rc genhtml_legend=1 00:17:35.159 --rc geninfo_all_blocks=1 00:17:35.159 --rc geninfo_unexecuted_blocks=1 00:17:35.159 00:17:35.159 ' 00:17:35.159 06:56:39 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.159 06:56:39 -- nvmf/common.sh@7 -- # uname -s 00:17:35.159 06:56:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.159 06:56:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.159 06:56:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.159 06:56:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.159 06:56:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.159 06:56:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.159 06:56:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.159 06:56:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.159 06:56:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.159 06:56:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.159 06:56:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:17:35.159 06:56:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:17:35.160 06:56:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.160 06:56:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.160 06:56:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.160 06:56:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.160 06:56:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.160 06:56:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.160 06:56:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.160 06:56:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.160 06:56:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.160 06:56:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.160 06:56:39 -- paths/export.sh@5 -- # export PATH 00:17:35.160 06:56:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.160 06:56:39 -- nvmf/common.sh@46 -- # : 0 00:17:35.160 06:56:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:35.160 06:56:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:35.160 06:56:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:35.160 06:56:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.160 06:56:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.160 06:56:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:35.160 06:56:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:35.160 06:56:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:35.160 06:56:39 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.160 06:56:39 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.160 06:56:39 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.160 06:56:39 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:35.160 06:56:39 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.160 06:56:39 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:35.160 06:56:39 -- host/multipath.sh@30 -- # nvmftestinit 00:17:35.160 06:56:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:35.160 06:56:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.160 06:56:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:35.160 06:56:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:35.160 06:56:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:35.160 06:56:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.160 06:56:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.160 06:56:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.160 06:56:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:35.160 06:56:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:35.160 06:56:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:35.160 06:56:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:35.160 06:56:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:35.160 06:56:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:35.160 06:56:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.160 06:56:39 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.160 06:56:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.160 06:56:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:35.160 06:56:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.160 06:56:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.160 06:56:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.160 06:56:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.160 06:56:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.160 06:56:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.160 06:56:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.160 06:56:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.160 06:56:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:35.160 06:56:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:35.160 Cannot find device "nvmf_tgt_br" 00:17:35.160 06:56:39 -- nvmf/common.sh@154 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.160 Cannot find device "nvmf_tgt_br2" 00:17:35.160 06:56:39 -- nvmf/common.sh@155 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:35.160 06:56:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:35.160 Cannot find device "nvmf_tgt_br" 00:17:35.160 06:56:39 -- nvmf/common.sh@157 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:35.160 Cannot find device "nvmf_tgt_br2" 00:17:35.160 06:56:39 -- nvmf/common.sh@158 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:35.160 06:56:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:35.160 06:56:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.160 06:56:39 -- nvmf/common.sh@161 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.160 06:56:39 -- nvmf/common.sh@162 -- # true 00:17:35.160 06:56:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.160 06:56:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.160 06:56:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.160 06:56:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.160 06:56:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.160 06:56:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.160 06:56:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.418 06:56:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.418 06:56:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.418 06:56:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:35.418 06:56:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:35.418 06:56:39 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:17:35.418 06:56:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:35.418 06:56:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.418 06:56:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.418 06:56:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.418 06:56:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:35.418 06:56:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:35.418 06:56:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.418 06:56:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.418 06:56:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.418 06:56:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.418 06:56:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.418 06:56:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:35.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:35.418 00:17:35.418 --- 10.0.0.2 ping statistics --- 00:17:35.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.418 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:35.418 06:56:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:35.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:17:35.418 00:17:35.418 --- 10.0.0.3 ping statistics --- 00:17:35.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.418 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:35.418 06:56:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:35.418 00:17:35.418 --- 10.0.0.1 ping statistics --- 00:17:35.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.418 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:35.418 06:56:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.418 06:56:39 -- nvmf/common.sh@421 -- # return 0 00:17:35.418 06:56:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:35.418 06:56:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.418 06:56:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:35.418 06:56:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:35.418 06:56:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.418 06:56:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:35.418 06:56:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:35.418 06:56:39 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:35.418 06:56:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.418 06:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.418 06:56:39 -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 06:56:39 -- nvmf/common.sh@469 -- # nvmfpid=84168 00:17:35.418 06:56:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:35.418 06:56:39 -- nvmf/common.sh@470 -- # waitforlisten 84168 00:17:35.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.418 06:56:39 -- common/autotest_common.sh@829 -- # '[' -z 84168 ']' 00:17:35.418 06:56:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.418 06:56:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.418 06:56:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.418 06:56:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.418 06:56:39 -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 [2024-12-13 06:56:39.881299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:35.418 [2024-12-13 06:56:39.881654] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.677 [2024-12-13 06:56:40.025008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:35.677 [2024-12-13 06:56:40.068051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.677 [2024-12-13 06:56:40.068464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.677 [2024-12-13 06:56:40.068616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.677 [2024-12-13 06:56:40.068786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
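The three ping results above come out of the virtual topology that nvmf_veth_init builds before the target starts: the target runs inside the nvmf_tgt_ns_spdk network namespace and is reached over veth pairs joined by a bridge. A condensed sketch, using only commands that appear in this trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk                                 # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge                               # bridge ties the two halves together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

All interfaces are then brought up (including lo inside the namespace), which is why 10.0.0.2 and 10.0.0.3 answer from the initiator side and 10.0.0.1 answers from inside the namespace.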
00:17:35.677 [2024-12-13 06:56:40.069044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.677 [2024-12-13 06:56:40.069066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.612 06:56:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.612 06:56:40 -- common/autotest_common.sh@862 -- # return 0 00:17:36.612 06:56:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:36.612 06:56:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.612 06:56:40 -- common/autotest_common.sh@10 -- # set +x 00:17:36.612 06:56:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.612 06:56:40 -- host/multipath.sh@33 -- # nvmfapp_pid=84168 00:17:36.612 06:56:40 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:36.871 [2024-12-13 06:56:41.175160] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.871 06:56:41 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:37.129 Malloc0 00:17:37.129 06:56:41 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:37.388 06:56:41 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.388 06:56:41 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.648 [2024-12-13 06:56:42.108488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.648 06:56:42 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:37.907 [2024-12-13 06:56:42.336700] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:37.907 06:56:42 -- host/multipath.sh@44 -- # bdevperf_pid=84222 00:17:37.907 06:56:42 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:37.907 06:56:42 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.907 06:56:42 -- host/multipath.sh@47 -- # waitforlisten 84222 /var/tmp/bdevperf.sock 00:17:37.907 06:56:42 -- common/autotest_common.sh@829 -- # '[' -z 84222 ']' 00:17:37.907 06:56:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.907 06:56:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.907 06:56:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
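Stripped of the xtrace noise, the target-side setup that just completed is six RPCs: one TCP transport, one 64 MiB malloc bdev, and one subsystem exposed through two listeners on the same address, differing only by port; those two listeners are the two "paths" the rest of the test flips between. Here rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py invocation seen in the trace, and the comments are interpretation rather than trace output:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB backing bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r turns on ANA reporting
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421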
00:17:37.907 06:56:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.907 06:56:42 -- common/autotest_common.sh@10 -- # set +x 00:17:38.844 06:56:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.844 06:56:43 -- common/autotest_common.sh@862 -- # return 0 00:17:38.844 06:56:43 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:39.102 06:56:43 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:39.669 Nvme0n1 00:17:39.669 06:56:43 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:39.928 Nvme0n1 00:17:39.928 06:56:44 -- host/multipath.sh@78 -- # sleep 1 00:17:39.928 06:56:44 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.864 06:56:45 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:40.864 06:56:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:41.123 06:56:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:41.382 06:56:45 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:41.382 06:56:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:41.382 06:56:45 -- host/multipath.sh@65 -- # dtrace_pid=84273 00:17:41.382 06:56:45 -- host/multipath.sh@66 -- # sleep 6 00:17:47.948 06:56:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:47.948 06:56:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:47.948 06:56:52 -- host/multipath.sh@67 -- # active_port=4421 00:17:47.948 06:56:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:47.948 Attaching 4 probes... 
00:17:47.948 @path[10.0.0.2, 4421]: 19703 00:17:47.948 @path[10.0.0.2, 4421]: 20217 00:17:47.948 @path[10.0.0.2, 4421]: 20009 00:17:47.948 @path[10.0.0.2, 4421]: 20071 00:17:47.948 @path[10.0.0.2, 4421]: 20104 00:17:47.948 06:56:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:47.948 06:56:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:47.948 06:56:52 -- host/multipath.sh@69 -- # sed -n 1p 00:17:47.948 06:56:52 -- host/multipath.sh@69 -- # port=4421 00:17:47.948 06:56:52 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:47.948 06:56:52 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:47.948 06:56:52 -- host/multipath.sh@72 -- # kill 84273 00:17:47.948 06:56:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:47.948 06:56:52 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:47.948 06:56:52 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:47.948 06:56:52 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:48.207 06:56:52 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:48.207 06:56:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:48.207 06:56:52 -- host/multipath.sh@65 -- # dtrace_pid=84382 00:17:48.207 06:56:52 -- host/multipath.sh@66 -- # sleep 6 00:17:54.777 06:56:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:54.777 06:56:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:54.777 06:56:58 -- host/multipath.sh@67 -- # active_port=4420 00:17:54.777 06:56:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.777 Attaching 4 probes... 
00:17:54.777 @path[10.0.0.2, 4420]: 19912 00:17:54.777 @path[10.0.0.2, 4420]: 20054 00:17:54.777 @path[10.0.0.2, 4420]: 20261 00:17:54.777 @path[10.0.0.2, 4420]: 20082 00:17:54.777 @path[10.0.0.2, 4420]: 20364 00:17:54.777 06:56:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:54.777 06:56:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:54.777 06:56:58 -- host/multipath.sh@69 -- # sed -n 1p 00:17:54.777 06:56:58 -- host/multipath.sh@69 -- # port=4420 00:17:54.777 06:56:58 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:54.777 06:56:58 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:54.777 06:56:58 -- host/multipath.sh@72 -- # kill 84382 00:17:54.777 06:56:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.777 06:56:58 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:54.777 06:56:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:54.777 06:56:59 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:55.036 06:56:59 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:55.036 06:56:59 -- host/multipath.sh@65 -- # dtrace_pid=84500 00:17:55.036 06:56:59 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:55.036 06:56:59 -- host/multipath.sh@66 -- # sleep 6 00:18:01.599 06:57:05 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:01.599 06:57:05 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:01.599 06:57:05 -- host/multipath.sh@67 -- # active_port=4421 00:18:01.599 06:57:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.599 Attaching 4 probes... 
00:18:01.599 @path[10.0.0.2, 4421]: 14870 00:18:01.599 @path[10.0.0.2, 4421]: 19664 00:18:01.599 @path[10.0.0.2, 4421]: 19353 00:18:01.599 @path[10.0.0.2, 4421]: 19355 00:18:01.599 @path[10.0.0.2, 4421]: 19820 00:18:01.599 06:57:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:01.599 06:57:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:01.599 06:57:05 -- host/multipath.sh@69 -- # sed -n 1p 00:18:01.599 06:57:05 -- host/multipath.sh@69 -- # port=4421 00:18:01.599 06:57:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.599 06:57:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.599 06:57:05 -- host/multipath.sh@72 -- # kill 84500 00:18:01.599 06:57:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.599 06:57:05 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:01.599 06:57:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:01.599 06:57:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:01.861 06:57:06 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:01.861 06:57:06 -- host/multipath.sh@65 -- # dtrace_pid=84618 00:18:01.861 06:57:06 -- host/multipath.sh@66 -- # sleep 6 00:18:01.861 06:57:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.434 06:57:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:08.434 06:57:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:08.434 06:57:12 -- host/multipath.sh@67 -- # active_port= 00:18:08.434 06:57:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.434 Attaching 4 probes... 
00:18:08.434 00:18:08.434 00:18:08.434 00:18:08.434 00:18:08.434 00:18:08.434 06:57:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:08.434 06:57:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:08.434 06:57:12 -- host/multipath.sh@69 -- # sed -n 1p 00:18:08.434 06:57:12 -- host/multipath.sh@69 -- # port= 00:18:08.434 06:57:12 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:08.434 06:57:12 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:08.434 06:57:12 -- host/multipath.sh@72 -- # kill 84618 00:18:08.434 06:57:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.434 06:57:12 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:08.434 06:57:12 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:08.434 06:57:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:08.693 06:57:13 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:08.693 06:57:13 -- host/multipath.sh@65 -- # dtrace_pid=84736 00:18:08.693 06:57:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.693 06:57:13 -- host/multipath.sh@66 -- # sleep 6 00:18:15.257 06:57:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:15.257 06:57:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:15.257 06:57:19 -- host/multipath.sh@67 -- # active_port=4421 00:18:15.257 06:57:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.257 Attaching 4 probes... 
00:18:15.257 @path[10.0.0.2, 4421]: 18879 00:18:15.257 @path[10.0.0.2, 4421]: 19951 00:18:15.257 @path[10.0.0.2, 4421]: 19442 00:18:15.257 @path[10.0.0.2, 4421]: 19379 00:18:15.257 @path[10.0.0.2, 4421]: 19043 00:18:15.257 06:57:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:15.257 06:57:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:15.257 06:57:19 -- host/multipath.sh@69 -- # sed -n 1p 00:18:15.257 06:57:19 -- host/multipath.sh@69 -- # port=4421 00:18:15.257 06:57:19 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.257 06:57:19 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.257 06:57:19 -- host/multipath.sh@72 -- # kill 84736 00:18:15.257 06:57:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.257 06:57:19 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:15.257 [2024-12-13 06:57:19.724133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ead80 is same with the state(5) to be set 00:18:15.257 [2024-12-13 06:57:19.724590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x19ead80 is same with the state(5) to be set
00:18:15.257 (the tcp.c:1576 recv-state error above repeats continuously between 06:57:19.724133 and 06:57:19.724855 while the 4421 listener is torn down; duplicates condensed)
00:18:15.258 06:57:19 -- host/multipath.sh@101 -- # sleep 1
00:18:16.635 06:57:20 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:18:16.635 06:57:20 -- host/multipath.sh@65 -- # dtrace_pid=84859
00:18:16.635 06:57:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:18:16.635 06:57:20 -- host/multipath.sh@66 -- # sleep 6
00:18:23.223 06:57:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:23.223 06:57:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:18:23.223 06:57:26 -- host/multipath.sh@67 -- # active_port=4420
00:18:23.223 06:57:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:23.223 Attaching 4 probes...
00:18:23.223 @path[10.0.0.2, 4420]: 18231 00:18:23.223 @path[10.0.0.2, 4420]: 18841 00:18:23.223 @path[10.0.0.2, 4420]: 19027 00:18:23.223 @path[10.0.0.2, 4420]: 19965 00:18:23.223 @path[10.0.0.2, 4420]: 19341 00:18:23.223 06:57:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:23.223 06:57:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:23.223 06:57:27 -- host/multipath.sh@69 -- # sed -n 1p 00:18:23.223 06:57:27 -- host/multipath.sh@69 -- # port=4420 00:18:23.223 06:57:27 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:23.223 06:57:27 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:23.223 06:57:27 -- host/multipath.sh@72 -- # kill 84859 00:18:23.223 06:57:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.223 06:57:27 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:23.223 [2024-12-13 06:57:27.292490] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:23.223 06:57:27 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:23.223 06:57:27 -- host/multipath.sh@111 -- # sleep 6 00:18:29.790 06:57:33 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:29.790 06:57:33 -- host/multipath.sh@65 -- # dtrace_pid=85039 00:18:29.790 06:57:33 -- host/multipath.sh@66 -- # sleep 6 00:18:29.790 06:57:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84168 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.148 06:57:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.148 06:57:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:35.407 06:57:39 -- host/multipath.sh@67 -- # active_port=4421 00:18:35.407 06:57:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.407 Attaching 4 probes... 
00:18:35.407 @path[10.0.0.2, 4421]: 19411
00:18:35.407 @path[10.0.0.2, 4421]: 19320
00:18:35.407 @path[10.0.0.2, 4421]: 19334
00:18:35.407 @path[10.0.0.2, 4421]: 19416
00:18:35.407 @path[10.0.0.2, 4421]: 19457
00:18:35.407 06:57:39 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:35.407 06:57:39 -- host/multipath.sh@69 -- # sed -n 1p
00:18:35.407 06:57:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:35.407 06:57:39 -- host/multipath.sh@69 -- # port=4421
00:18:35.407 06:57:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:18:35.407 06:57:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:18:35.407 06:57:39 -- host/multipath.sh@72 -- # kill 85039
00:18:35.407 06:57:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:35.407 06:57:39 -- host/multipath.sh@114 -- # killprocess 84222
00:18:35.407 06:57:39 -- common/autotest_common.sh@936 -- # '[' -z 84222 ']'
00:18:35.407 06:57:39 -- common/autotest_common.sh@940 -- # kill -0 84222
00:18:35.407 06:57:39 -- common/autotest_common.sh@941 -- # uname
00:18:35.407 06:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:35.407 06:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84222
00:18:35.408 killing process with pid 84222
06:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:35.408 06:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:35.408 06:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84222'
00:18:35.408 06:57:39 -- common/autotest_common.sh@955 -- # kill 84222
00:18:35.408 06:57:39 -- common/autotest_common.sh@960 -- # wait 84222
00:18:35.675 Connection closed with partial response:
00:18:35.676
00:18:35.676
00:18:35.676 06:57:40 -- host/multipath.sh@116 -- # wait 84222
00:18:35.676 06:57:40 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:35.676 [2024-12-13 06:56:42.394314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:35.676 [2024-12-13 06:56:42.394429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84222 ]
00:18:35.676 [2024-12-13 06:56:42.527495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.676 [2024-12-13 06:56:42.560038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:35.676 Running I/O for 90 seconds...
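killprocess here is the shared helper from test/common/autotest_common.sh. Matching its xtrace line numbers above, the flow is roughly this sketch, inferred from the trace rather than copied from the function:

    # Sketch only: shape inferred from the @936-@960 xtrace above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                # @936: a pid must be supplied
        kill -0 "$pid" || return 0               # @940: already gone, nothing to do
        if [ "$(uname)" = Linux ]; then          # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942
        fi
        # @946: if the process is a sudo wrapper, the real helper kills its
        # children instead; not needed here, so left out of the sketch.
        echo "killing process with pid $pid"     # @954
        kill "$pid"                              # @955
        wait "$pid"                              # @960: reap it, propagate its exit status
    }

Here the victim is pid 84222, the bdevperf app started for the test (ps reports reactor_2, an SPDK reactor thread name), so the plain kill/wait path runs; the "Connection closed with partial response" line appears to be bdevperf tearing down its qpairs on the way out. The try.txt dump that follows is that bdevperf run's own log. Every completion in it is stamped ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path related) with status code 0x02 (ANA inaccessible): the target is rejecting I/O on the path whose ANA group the test made inaccessible, which is exactly the condition the host multipath code is expected to detect and route around.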
00:18:35.676 [2024-12-13 06:56:52.534019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.534935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.534956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.534970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.535091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:35.676 [2024-12-13 06:56:52.535306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.676 [2024-12-13 06:56:52.535431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:35.676 [2024-12-13 06:56:52.535452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.676 [2024-12-13 06:56:52.535466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.535584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.535762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.535976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.535999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:18:35.677 [2024-12-13 06:56:52.536538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.536918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.536959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.536997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.677 [2024-12-13 06:56:52.537012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:35.677 [2024-12-13 06:56:52.537033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.677 [2024-12-13 06:56:52.537048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:35.678 [2024-12-13 06:56:52.537732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.537931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.537968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.537989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.538004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.538048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.538087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.538124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.538164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.538200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.538239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.538277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.538298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.538314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.540053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.540100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.540140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.678 [2024-12-13 06:56:52.540195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.678 [2024-12-13 06:56:52.540258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:35.678 [2024-12-13 06:56:52.540282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:18:35.679 [2024-12-13 06:56:52.540657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:52.540875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:52.540935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:52.540951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:59.103220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:59.103421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.103972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.103988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.104011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:59.104028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.104051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:59.104068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.104090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.679 [2024-12-13 06:56:59.104107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.104156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-12-13 06:56:59.104189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:35.679 [2024-12-13 06:56:59.104227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.104244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:35.680 [2024-12-13 06:56:59.104294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.104330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.104531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.104619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.104976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.104990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.680 [2024-12-13 06:56:59.105626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:18:35.680 [2024-12-13 06:56:59.105648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-12-13 06:56:59.105663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:35.680 [2024-12-13 06:56:59.105684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.105700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.105737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.105773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.105809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.105856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.105908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.105944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.105979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.105999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.106727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:35.681 [2024-12-13 06:56:59.106848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.106991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.107027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.107063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.107098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.681 [2024-12-13 06:56:59.107133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.107168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-12-13 06:56:59.107203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:35.681 [2024-12-13 06:56:59.107224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.107274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.107316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.107645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.107662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.108656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.108710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.108772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.108817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.108862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.108938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.108967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.108982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:18:35.682 [2024-12-13 06:56:59.109142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.682 [2024-12-13 06:56:59.109885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:56:59.109957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:56:59.109972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:57:06.180260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.682 [2024-12-13 06:57:06.180372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:35.682 [2024-12-13 06:57:06.180430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180508] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117576 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.180859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.180976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.180996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 
06:57:06.181642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.683 [2024-12-13 06:57:06.181806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.683 [2024-12-13 06:57:06.181893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.683 [2024-12-13 06:57:06.181921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.181937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.181958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.181973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.181994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.182554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.182967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.182983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.183019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.183092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.183129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183187] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.183202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.684 [2024-12-13 06:57:06.183354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.684 [2024-12-13 06:57:06.183406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:35.684 [2024-12-13 06:57:06.183429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.183445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.183482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 
06:57:06.183576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.183591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.183711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.183964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.183981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.184135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.184406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.184486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.685 [2024-12-13 06:57:06.184560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:35.685 [2024-12-13 06:57:06.184763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:35.685 [2024-12-13 06:57:06.184778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.184799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.184814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.185676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.185729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.185775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.185820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.185865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.185911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.185960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.185990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.186006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.186051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.186096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.186141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.186186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.186244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.186290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:06.186384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:06.186418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:06.186436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.724929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.724987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:96 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:19.725568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.686 [2024-12-13 06:57:19.725642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.686 [2024-12-13 06:57:19.725688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.686 [2024-12-13 06:57:19.725711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1712 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.725981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.725995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 
06:57:19.726055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.687 [2024-12-13 06:57:19.726875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.687 [2024-12-13 06:57:19.726905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.687 [2024-12-13 06:57:19.726921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.726950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.726964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:35.688 [2024-12-13 06:57:19.726979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.726993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727646] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.688 [2024-12-13 06:57:19.727927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.727982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.727997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.728014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.728029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.688 [2024-12-13 06:57:19.728045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1464 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.688 [2024-12-13 06:57:19.728060-729175] nvme_qpair.c: [~50 near-identical record pairs elided: queued READ/WRITE commands (sqid:1, lba 1472-2256 and 1624-1656) printed by nvme_io_qpair_print_command, each completed as ABORTED - SQ DELETION (00/08) qid:1] 00:18:35.689 [2024-12-13 06:57:19.729189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b810 is same with the state(5) to be set 00:18:35.689 [2024-12-13 06:57:19.729206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.689 [2024-12-13 06:57:19.729216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.689 [2024-12-13 06:57:19.729227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:8 PRP1 0x0 PRP2 0x0 00:18:35.689 [2024-12-13 06:57:19.729240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.689 [2024-12-13 06:57:19.729285] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e2b810 was disconnected and freed. reset controller.
00:18:35.689 [2024-12-13 06:57:19.729388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.689 [2024-12-13 06:57:19.729414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.689 [2024-12-13 06:57:19.729429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.689 [2024-12-13 06:57:19.729459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.689 [2024-12-13 06:57:19.729474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.689 [2024-12-13 06:57:19.729487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.689 [2024-12-13 06:57:19.729504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.689 [2024-12-13 06:57:19.729518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.689 [2024-12-13 06:57:19.729532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4f30 is same with the state(5) to be set 00:18:35.689 [2024-12-13 06:57:19.730567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.689 [2024-12-13 06:57:19.730607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db4f30 (9): Bad file descriptor 00:18:35.689 [2024-12-13 06:57:19.730909] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.689 [2024-12-13 06:57:19.730997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.689 [2024-12-13 06:57:19.731049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.689 [2024-12-13 06:57:19.731071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db4f30 with addr=10.0.0.2, port=4421 00:18:35.690 [2024-12-13 06:57:19.731088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4f30 is same with the state(5) to be set 00:18:35.690 [2024-12-13 06:57:19.731157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db4f30 (9): Bad file descriptor 00:18:35.690 [2024-12-13 06:57:19.731199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.690 [2024-12-13 06:57:19.731216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.690 [2024-12-13 06:57:19.731231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.690 [2024-12-13 06:57:19.731264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
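This is the expected shape of a lost-listener event: the initiator's reconnect to 10.0.0.2 port 4421 is refused (errno = 111, ECONNREFUSED, reported by both the uring and posix socket layers), controller reinitialization fails, and bdev_nvme keeps rescheduling the reset; the retry that finally succeeds shows up about ten seconds later. The knobs governing this are the reconnect arguments this test family passes to bdev_nvme_attach_controller. A minimal sketch using the same flag values that appear later in this log (the attach for this multipath run happened before this excerpt, so the bdev name, address, and port here are illustrative):

  # Sketch: attach an NVMe-oF TCP controller with bounded reconnect behavior.
  # --reconnect-delay-sec 2     wait 2 s between reconnect attempts
  # --ctrlr-loss-timeout-sec 5  give up on the controller after ~5 s of failed reconnects
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2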
00:18:35.690 [2024-12-13 06:57:19.731282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:35.690 [2024-12-13 06:57:29.775428] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:35.690 Received shutdown signal, test time was about 55.541677 seconds
00:18:35.690 Latency(us)
00:18:35.690 [2024-12-13T06:57:40.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.690 [2024-12-13T06:57:40.209Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:35.690 Verification LBA range: start 0x0 length 0x4000
00:18:35.690 Nvme0n1 : 55.54 11161.75 43.60 0.00 0.00 11448.01 118.23 7015926.69
00:18:35.690 [2024-12-13T06:57:40.209Z] ===================================================================================================================
00:18:35.690 [2024-12-13T06:57:40.209Z] Total : 11161.75 43.60 0.00 0.00 11448.01 118.23 7015926.69
00:18:35.690 06:57:40 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.949 06:57:40 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:35.949 06:57:40 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:35.949 06:57:40 -- host/multipath.sh@125 -- # nvmftestfini 00:18:35.949 06:57:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.949 06:57:40 -- nvmf/common.sh@116 -- # sync 00:18:35.949 06:57:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.949 06:57:40 -- nvmf/common.sh@119 -- # set +e 00:18:35.949 06:57:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.949 06:57:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.949 rmmod nvme_tcp 00:18:35.949 rmmod nvme_fabrics 00:18:35.949 rmmod nvme_keyring 00:18:35.949 06:57:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.949 06:57:40 -- nvmf/common.sh@123 -- # set -e 00:18:35.949 06:57:40 -- nvmf/common.sh@124 -- # return 0 00:18:35.949 06:57:40 -- nvmf/common.sh@477 -- # '[' -n 84168 ']' 00:18:35.949 06:57:40 -- nvmf/common.sh@478 -- # killprocess 84168 00:18:35.949 06:57:40 -- common/autotest_common.sh@936 -- # '[' -z 84168 ']' 00:18:35.949 06:57:40 -- common/autotest_common.sh@940 -- # kill -0 84168 00:18:35.949 06:57:40 -- common/autotest_common.sh@941 -- # uname 00:18:35.949 06:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.949 06:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84168 00:18:35.949 06:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.949 06:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.949 killing process with pid 84168 00:18:35.949 06:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84168' 00:18:35.949 06:57:40 -- common/autotest_common.sh@955 -- # kill 84168 00:18:35.949 06:57:40 -- common/autotest_common.sh@960 -- # wait 84168 00:18:36.208 06:57:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.208 06:57:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.208 06:57:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.208 06:57:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.208 06:57:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.208 06:57:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
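A quick consistency check on the summary table: 11161.75 IO/s x 4096 bytes per IO = 45,718,528 B/s, or about 43.60 MiB/s (45,718,528 / 1,048,576), which agrees with the reported MiB/s column; over the 55.54 s runtime that works out to roughly 11161.75 x 55.54, i.e. about 620,000 I/Os completed.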
00:18:36.208 06:57:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.208 06:57:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.208 06:57:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:36.208
00:18:36.208 real 1m1.405s
00:18:36.208 user 2m50.315s
00:18:36.208 sys 0m18.194s
00:18:36.208 06:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.208 06:57:40 -- common/autotest_common.sh@10 -- # set +x
00:18:36.208 ************************************
00:18:36.208 END TEST nvmf_multipath
00:18:36.208 ************************************
00:18:36.208 06:57:40 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:36.208 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.208 06:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.208 06:57:40 -- common/autotest_common.sh@10 -- # set +x
00:18:36.208 ************************************
00:18:36.208 START TEST nvmf_timeout
00:18:36.208 ************************************
00:18:36.208 06:57:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:36.467 * Looking for test storage... 00:18:36.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:36.467 06:57:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:36.467 06:57:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:36.467 06:57:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:36.467 06:57:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:36.467 06:57:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:36.467 06:57:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:36.467 06:57:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:36.467 06:57:40 -- scripts/common.sh@335 -- # IFS=.-: 00:18:36.467 06:57:40 -- scripts/common.sh@335 -- # read -ra ver1 00:18:36.467 06:57:40 -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.467 06:57:40 -- scripts/common.sh@336 -- # read -ra ver2 00:18:36.467 06:57:40 -- scripts/common.sh@337 -- # local 'op=<' 00:18:36.467 06:57:40 -- scripts/common.sh@339 -- # ver1_l=2 00:18:36.467 06:57:40 -- scripts/common.sh@340 -- # ver2_l=1 00:18:36.467 06:57:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:36.467 06:57:40 -- scripts/common.sh@343 -- # case "$op" in 00:18:36.467 06:57:40 -- scripts/common.sh@344 -- # : 1 00:18:36.467 06:57:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:36.467 06:57:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:18:36.467 06:57:40 -- scripts/common.sh@364 -- # decimal 1 00:18:36.467 06:57:40 -- scripts/common.sh@352 -- # local d=1 00:18:36.467 06:57:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.467 06:57:40 -- scripts/common.sh@354 -- # echo 1 00:18:36.467 06:57:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:36.467 06:57:40 -- scripts/common.sh@365 -- # decimal 2 00:18:36.467 06:57:40 -- scripts/common.sh@352 -- # local d=2 00:18:36.467 06:57:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.467 06:57:40 -- scripts/common.sh@354 -- # echo 2 00:18:36.467 06:57:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:36.467 06:57:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:36.468 06:57:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:36.468 06:57:40 -- scripts/common.sh@367 -- # return 0 00:18:36.468 06:57:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.468 06:57:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.468 --rc genhtml_branch_coverage=1 00:18:36.468 --rc genhtml_function_coverage=1 00:18:36.468 --rc genhtml_legend=1 00:18:36.468 --rc geninfo_all_blocks=1 00:18:36.468 --rc geninfo_unexecuted_blocks=1 00:18:36.468 00:18:36.468 ' 00:18:36.468 06:57:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.468 --rc genhtml_branch_coverage=1 00:18:36.468 --rc genhtml_function_coverage=1 00:18:36.468 --rc genhtml_legend=1 00:18:36.468 --rc geninfo_all_blocks=1 00:18:36.468 --rc geninfo_unexecuted_blocks=1 00:18:36.468 00:18:36.468 ' 00:18:36.468 06:57:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.468 --rc genhtml_branch_coverage=1 00:18:36.468 --rc genhtml_function_coverage=1 00:18:36.468 --rc genhtml_legend=1 00:18:36.468 --rc geninfo_all_blocks=1 00:18:36.468 --rc geninfo_unexecuted_blocks=1 00:18:36.468 00:18:36.468 ' 00:18:36.468 06:57:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:36.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.468 --rc genhtml_branch_coverage=1 00:18:36.468 --rc genhtml_function_coverage=1 00:18:36.468 --rc genhtml_legend=1 00:18:36.468 --rc geninfo_all_blocks=1 00:18:36.468 --rc geninfo_unexecuted_blocks=1 00:18:36.468 00:18:36.468 ' 00:18:36.468 06:57:40 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:36.468 06:57:40 -- nvmf/common.sh@7 -- # uname -s 00:18:36.468 06:57:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.468 06:57:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.468 06:57:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.468 06:57:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.468 06:57:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.468 06:57:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.468 06:57:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.468 06:57:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.468 06:57:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.468 06:57:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:18:36.468 
06:57:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:18:36.468 06:57:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.468 06:57:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.468 06:57:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:36.468 06:57:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:36.468 06:57:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.468 06:57:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.468 06:57:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.468 06:57:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.468 06:57:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.468 06:57:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.468 06:57:40 -- paths/export.sh@5 -- # export PATH 00:18:36.468 06:57:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.468 06:57:40 -- nvmf/common.sh@46 -- # : 0 00:18:36.468 06:57:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:36.468 06:57:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:36.468 06:57:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:36.468 06:57:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.468 06:57:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.468 06:57:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
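The host identity sourced above comes from nvme-cli. A minimal sketch of the derivation (assumes nvme-cli is installed; the parameter-expansion trim is an assumption about how the harness extracts the ID, shown only to illustrate the relationship between the two values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':' (assumed derivation)
  echo "$NVME_HOSTNQN ($NVME_HOSTID)"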
00:18:36.468 06:57:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:36.468 06:57:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:36.468 06:57:40 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.468 06:57:40 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.468 06:57:40 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.468 06:57:40 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:36.468 06:57:40 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.468 06:57:40 -- host/timeout.sh@19 -- # nvmftestinit 00:18:36.468 06:57:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:36.468 06:57:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.468 06:57:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:36.468 06:57:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:36.468 06:57:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:36.468 06:57:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.468 06:57:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.468 06:57:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.468 06:57:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:36.468 06:57:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:36.468 06:57:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.468 06:57:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.468 06:57:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.468 06:57:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:36.468 06:57:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.468 06:57:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.468 06:57:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.468 06:57:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.468 06:57:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.468 06:57:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.468 06:57:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.468 06:57:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.468 06:57:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:36.468 06:57:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:36.468 Cannot find device "nvmf_tgt_br" 00:18:36.468 06:57:40 -- nvmf/common.sh@154 -- # true 00:18:36.468 06:57:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.468 Cannot find device "nvmf_tgt_br2" 00:18:36.468 06:57:40 -- nvmf/common.sh@155 -- # true 00:18:36.468 06:57:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:36.468 06:57:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:36.468 Cannot find device "nvmf_tgt_br" 00:18:36.468 06:57:40 -- nvmf/common.sh@157 -- # true 00:18:36.468 06:57:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:36.468 Cannot find device "nvmf_tgt_br2" 00:18:36.468 06:57:40 -- nvmf/common.sh@158 -- # true 00:18:36.468 06:57:40 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:36.727 06:57:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:36.727 06:57:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.728 06:57:41 -- nvmf/common.sh@161 -- # true 00:18:36.728 06:57:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.728 06:57:41 -- nvmf/common.sh@162 -- # true 00:18:36.728 06:57:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.728 06:57:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.728 06:57:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.728 06:57:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.728 06:57:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.728 06:57:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.728 06:57:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.728 06:57:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:36.728 06:57:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:36.728 06:57:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:36.728 06:57:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:36.728 06:57:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:36.728 06:57:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:36.728 06:57:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.728 06:57:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.728 06:57:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.728 06:57:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:36.728 06:57:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:36.728 06:57:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:36.728 06:57:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:36.728 06:57:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:36.728 06:57:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:36.728 06:57:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.728 06:57:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:36.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:36.728 00:18:36.728 --- 10.0.0.2 ping statistics --- 00:18:36.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.728 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:36.728 06:57:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:36.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:36.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:36.728 00:18:36.728 --- 10.0.0.3 ping statistics --- 00:18:36.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.728 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:36.728 06:57:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:36.728 00:18:36.728 --- 10.0.0.1 ping statistics --- 00:18:36.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.728 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:36.728 06:57:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.728 06:57:41 -- nvmf/common.sh@421 -- # return 0 00:18:36.728 06:57:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:36.728 06:57:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.728 06:57:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:36.728 06:57:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:36.728 06:57:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.728 06:57:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:36.728 06:57:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:36.728 06:57:41 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:36.728 06:57:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:36.728 06:57:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.728 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:18:36.987 06:57:41 -- nvmf/common.sh@469 -- # nvmfpid=85350 00:18:36.987 06:57:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:36.987 06:57:41 -- nvmf/common.sh@470 -- # waitforlisten 85350 00:18:36.987 06:57:41 -- common/autotest_common.sh@829 -- # '[' -z 85350 ']' 00:18:36.987 06:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.987 06:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.987 06:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.987 06:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.987 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:18:36.987 [2024-12-13 06:57:41.288012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:36.987 [2024-12-13 06:57:41.288087] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.987 [2024-12-13 06:57:41.422295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:36.987 [2024-12-13 06:57:41.454112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:36.987 [2024-12-13 06:57:41.454276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.987 [2024-12-13 06:57:41.454288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
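The three successful pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the veth/bridge topology the harness just built. Condensed into one runnable block, assuming root privileges and iproute2; this mirrors the ip commands traced above rather than the harness's exact functions:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target side lives in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br           # bridge the three host-side halves
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT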
00:18:36.987 [2024-12-13 06:57:41.454296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.987 [2024-12-13 06:57:41.454449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.987 [2024-12-13 06:57:41.454476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.246 06:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.246 06:57:41 -- common/autotest_common.sh@862 -- # return 0 00:18:37.246 06:57:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:37.246 06:57:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.246 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 06:57:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.246 06:57:41 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:37.247 06:57:41 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:37.506 [2024-12-13 06:57:41.840019] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.506 06:57:41 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:37.765 Malloc0 00:18:37.765 06:57:42 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.023 06:57:42 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.282 06:57:42 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.541 [2024-12-13 06:57:42.876763] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.541 06:57:42 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:38.541 06:57:42 -- host/timeout.sh@32 -- # bdevperf_pid=85393 00:18:38.541 06:57:42 -- host/timeout.sh@34 -- # waitforlisten 85393 /var/tmp/bdevperf.sock 00:18:38.541 06:57:42 -- common/autotest_common.sh@829 -- # '[' -z 85393 ']' 00:18:38.541 06:57:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.541 06:57:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.541 06:57:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.541 06:57:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.541 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.541 [2024-12-13 06:57:42.931442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
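At this point the target side is fully assembled; the trace above amounts to the following five RPCs against the nvmf_tgt started earlier (paths as in this log; rpc.py talks to the target's default RPC socket here, while bdevperf below gets its own socket):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The timeout test then deliberately removes that listener while bdevperf has I/O in flight (see the nvmf_subsystem_remove_listener call just below), which is what produces the next burst of ABORTED - SQ DELETION completions.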
00:18:38.541 [2024-12-13 06:57:42.931530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85393 ] 00:18:38.799 [2024-12-13 06:57:43.069420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.799 [2024-12-13 06:57:43.101847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.799 06:57:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.799 06:57:43 -- common/autotest_common.sh@862 -- # return 0 00:18:38.799 06:57:43 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:39.056 06:57:43 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:39.314 NVMe0n1 00:18:39.314 06:57:43 -- host/timeout.sh@51 -- # rpc_pid=85408 00:18:39.314 06:57:43 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.314 06:57:43 -- host/timeout.sh@53 -- # sleep 1 00:18:39.314 Running I/O for 10 seconds... 00:18:40.250 06:57:44 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.512 [2024-12-13 06:57:44.973357-973506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa609c0 is same with the state(5) to be set [previous record repeated 8 times] 00:18:40.512 [2024-12-13 06:57:44.973562-975102] nvme_qpair.c: [~65 near-identical record pairs elided: in-flight READ/WRITE commands (sqid:1, lba 128928-130000 in this excerpt) printed by nvme_io_qpair_print_command, each completed as ABORTED - SQ DELETION (00/08) once the listener was removed] 00:18:40.514
[2024-12-13 06:57:44.975110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.514 [2024-12-13 06:57:44.975546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.514 [2024-12-13 06:57:44.975597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.514 [2024-12-13 06:57:44.975606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.975963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.975983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.975994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.976026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.976066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.515 [2024-12-13 06:57:44.976087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.515 [2024-12-13 06:57:44.976278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfccf0 is same with the state(5) to be set 00:18:40.515 [2024-12-13 06:57:44.976315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:40.515 [2024-12-13 06:57:44.976322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:40.515 [2024-12-13 06:57:44.976331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129624 len:8 PRP1 0x0 PRP2 0x0 00:18:40.515 [2024-12-13 06:57:44.976339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.515 [2024-12-13 06:57:44.976382] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcfccf0 was disconnected and freed. reset controller. 
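The flood of notices above is the signature of a lost transport rather than of media errors: the log itself shows bdev_nvme aborting the queued i/o, completing each command manually with ABORTED - SQ DELETION (00/08), and then freeing qpair 0xcfccf0 before the reset path takes over. In this suite the loss appears to be injected from the target side by dropping the TCP listener; a minimal sketch of that injection, assuming a target that is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 as elsewhere in this log:

  # Drop the listener out from under the connected host (run against the target's RPC socket);
  # every I/O still queued on the host's qpair should then complete as ABORTED - SQ DELETION.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420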
00:18:40.515 [2024-12-13 06:57:44.976634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:40.515 [2024-12-13 06:57:44.976771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcacc20 (9): Bad file descriptor
00:18:40.515 [2024-12-13 06:57:44.976890] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:40.515 [2024-12-13 06:57:44.976950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:40.515 [2024-12-13 06:57:44.976992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:40.515 [2024-12-13 06:57:44.977008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcacc20 with addr=10.0.0.2, port=4420
00:18:40.515 [2024-12-13 06:57:44.977018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcacc20 is same with the state(5) to be set
00:18:40.515 [2024-12-13 06:57:44.977039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcacc20 (9): Bad file descriptor
00:18:40.515 [2024-12-13 06:57:44.977056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:40.515 [2024-12-13 06:57:44.977065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:40.515 [2024-12-13 06:57:44.977074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:40.515 [2024-12-13 06:57:44.977094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:40.515 [2024-12-13 06:57:44.977104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:40.516 06:57:44 -- host/timeout.sh@56 -- # sleep 2
00:18:43.047 [2024-12-13 06:57:46.977219] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:43.047 [2024-12-13 06:57:46.977330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:43.047 [2024-12-13 06:57:46.977423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:43.047 [2024-12-13 06:57:46.977444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcacc20 with addr=10.0.0.2, port=4420
00:18:43.047 [2024-12-13 06:57:46.977457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcacc20 is same with the state(5) to be set
00:18:43.047 [2024-12-13 06:57:46.977483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcacc20 (9): Bad file descriptor
00:18:43.047 [2024-12-13 06:57:46.977501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:43.047 [2024-12-13 06:57:46.977526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:43.047 [2024-12-13 06:57:46.977536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:43.047 [2024-12-13 06:57:46.977562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:43.047 [2024-12-13 06:57:46.977573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:43.047 06:57:47 -- host/timeout.sh@57 -- # get_controller
00:18:43.047 06:57:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:43.047 06:57:47 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:43.047 06:57:47 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:18:43.047 06:57:47 -- host/timeout.sh@58 -- # get_bdev
00:18:43.047 06:57:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:43.047 06:57:47 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:43.047 06:57:47 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:18:43.047 06:57:47 -- host/timeout.sh@61 -- # sleep 5
00:18:44.948 [2024-12-13 06:57:48.977691] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.948 [2024-12-13 06:57:48.977786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.948 [2024-12-13 06:57:48.977825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.948 [2024-12-13 06:57:48.977840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcacc20 with addr=10.0.0.2, port=4420
00:18:44.948 [2024-12-13 06:57:48.977852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcacc20 is same with the state(5) to be set
00:18:44.948 [2024-12-13 06:57:48.977876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcacc20 (9): Bad file descriptor
00:18:44.948 [2024-12-13 06:57:48.977893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:44.948 [2024-12-13 06:57:48.977902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:44.948 [2024-12-13 06:57:48.977911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:44.948 [2024-12-13 06:57:48.977937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.948 [2024-12-13 06:57:48.977947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:46.848 [2024-12-13 06:57:50.977970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:46.848 [2024-12-13 06:57:50.978020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:46.848 [2024-12-13 06:57:50.978046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:46.848 [2024-12-13 06:57:50.978056] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:18:46.848 [2024-12-13 06:57:50.978081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
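A hedged reading of the @57/@58 checks above: while reconnects to 10.0.0.2:4420 keep failing, the script asks the bdevperf process over its RPC socket whether the controller and its bdev still exist, and both checks pass (NVMe0 and NVMe0n1 are still registered), meaning the bdev layer has not yet given up on the controller. The equivalent standalone queries, with the socket path used throughout this log:

  # Poll bdevperf's view of the controller and bdev while the target is unreachable.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected: NVMe0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected: NVMe0n1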
00:18:47.836
00:18:47.836 Latency(us)
00:18:47.836 [2024-12-13T06:57:52.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.836 [2024-12-13T06:57:52.355Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:47.836 Verification LBA range: start 0x0 length 0x4000
00:18:47.837 NVMe0n1 : 8.17 1978.04 7.73 15.67 0.00 64119.79 3038.49 7015926.69
00:18:47.837 [2024-12-13T06:57:52.356Z] ===================================================================================================================
00:18:47.837 [2024-12-13T06:57:52.356Z] Total : 1978.04 7.73 15.67 0.00 64119.79 3038.49 7015926.69
00:18:47.837 0
00:18:48.095 06:57:52 -- host/timeout.sh@62 -- # get_controller
00:18:48.095 06:57:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:48.095 06:57:52 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:48.354 06:57:52 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:48.354 06:57:52 -- host/timeout.sh@63 -- # get_bdev
00:18:48.354 06:57:52 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:48.354 06:57:52 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:48.613 06:57:53 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:48.613 06:57:53 -- host/timeout.sh@65 -- # wait 85408
00:18:48.613 06:57:53 -- host/timeout.sh@67 -- # killprocess 85393
00:18:48.613 06:57:53 -- common/autotest_common.sh@936 -- # '[' -z 85393 ']'
00:18:48.613 06:57:53 -- common/autotest_common.sh@940 -- # kill -0 85393
00:18:48.613 06:57:53 -- common/autotest_common.sh@941 -- # uname
00:18:48.613 06:57:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:48.613 06:57:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85393
00:18:48.872 killing process with pid 85393
Received shutdown signal, test time was about 9.331992 seconds
00:18:48.872
00:18:48.872 Latency(us)
00:18:48.872 [2024-12-13T06:57:53.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.872 [2024-12-13T06:57:53.391Z] ===================================================================================================================
00:18:48.872 [2024-12-13T06:57:53.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:48.872 06:57:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:48.872 06:57:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:48.872 06:57:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85393'
00:18:48.872 06:57:53 -- common/autotest_common.sh@955 -- # kill 85393
00:18:48.872 06:57:53 -- common/autotest_common.sh@960 -- # wait 85393
00:18:48.872 06:57:53 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-13 06:57:53.482795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:49.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
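A quick sanity check on the first Latency(us) table above, under its stated job parameters (verify workload, depth 128, IO size 4096): throughput in MiB/s should equal IOPS times 4 KiB, and it does; the 15.67 Fail/s column is presumably the aborted I/O from the injected connection loss surfacing as failures, while TO/s stays at 0.00:

  # 1978.04 IOPS x 4096 B per I/O, expressed in MiB/s
  python3 -c 'print(1978.04 * 4096 / 2**20)'   # 7.7267..., matching the 7.73 in the MiB/s column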
00:18:49.131 06:57:53 -- host/timeout.sh@74 -- # bdevperf_pid=85526
06:57:53 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:49.131 06:57:53 -- host/timeout.sh@76 -- # waitforlisten 85526 /var/tmp/bdevperf.sock
00:18:49.131 06:57:53 -- common/autotest_common.sh@829 -- # '[' -z 85526 ']'
00:18:49.131 06:57:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:49.131 06:57:53 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:49.131 06:57:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:49.131 06:57:53 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:49.131 06:57:53 -- common/autotest_common.sh@10 -- # set +x
00:18:49.389 [2024-12-13 06:57:53.544820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:49.389 [2024-12-13 06:57:53.545091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85526 ]
00:18:49.389 [2024-12-13 06:57:53.680487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:49.389 [2024-12-13 06:57:53.712609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:50.324 06:57:54 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:50.324 06:57:54 -- common/autotest_common.sh@862 -- # return 0
00:18:50.324 06:57:54 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:18:50.324 06:57:54 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:50.583 NVMe0n1
00:18:50.583 06:57:55 -- host/timeout.sh@84 -- # rpc_pid=85550
00:18:50.583 06:57:55 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:50.583 06:57:55 -- host/timeout.sh@86 -- # sleep 1
00:18:50.840 Running I/O for 10 seconds...
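Before the next fault is injected, a hedged reading of the attach flags above: --reconnect-delay-sec 1 should make the bdev layer retry the connection roughly once per second, --fast-io-fail-timeout-sec 2 lets pending I/O start failing back after about two seconds, and --ctrlr-loss-timeout-sec 5 deletes the controller after about five seconds without a successful reconnect, which is consistent with the retry-then-give-up sequence and the eventually empty controller list observed for the previous bdevperf instance. A crude way to watch that countdown from the shell, reusing the same RPC socket:

  # NVMe0 should stay listed for a few seconds after the listener drops, then disappear.
  for i in $(seq 1 6); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
      sleep 1
  done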
00:18:51.773 06:57:56 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:52.033 [2024-12-13 06:57:56.340263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60520 is same with the state(5) to be set
[... a dozen more identical nvmf_tcp_qpair_set_recv_state notices for tqpair=0xa60520 elided ...]
00:18:52.033 [2024-12-13 06:57:56.340879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:52.034 [2024-12-13 06:57:56.340925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: queued READ and WRITE commands for lba 125400 through 126440 on sqid:1, every one completed with ABORTED - SQ DELETION (00/08) ...]
00:18:52.035 [2024-12-13 06:57:56.342095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:52.035 [2024-12-13 06:57:56.342102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.035 [2024-12-13 06:57:56.342112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.035 [2024-12-13 06:57:56.342120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.035 [2024-12-13 06:57:56.342129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.035 [2024-12-13 06:57:56.342137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.035 [2024-12-13 06:57:56.342147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.035 [2024-12-13 06:57:56.342155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.035 [2024-12-13 06:57:56.342164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.035 [2024-12-13 06:57:56.342172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.035 [2024-12-13 06:57:56.342181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.035 [2024-12-13 06:57:56.342189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.035 [2024-12-13 06:57:56.342199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 
06:57:56.342288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.036 [2024-12-13 06:57:56.342663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.036 [2024-12-13 06:57:56.342708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.036 [2024-12-13 06:57:56.342716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 
nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.342875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.342911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.342928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.342964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.342981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.342991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.342998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126672 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.343016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.343051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.037 [2024-12-13 06:57:56.343106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 [2024-12-13 06:57:56.343176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.037 [2024-12-13 06:57:56.343185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.037 
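The "(00/08)" printed after each aborted completion above is SPDK's (SCT/SC) status pair in hex: status code type 0x0 selects the NVMe generic command status set, in which status code 0x08 is "Command Aborted due to SQ Deletion". A minimal decode in Python, with the lookup table abbreviated to the codes that actually appear in this log:

# Decode the "(SCT/SC)" pair SPDK prints after each completion status.
# Table abbreviated to the generic-status codes seen in this log.
GENERIC_STATUS = {
    0x00: "SUCCESS",                # Successful Completion
    0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
}

def decode(sct: int, sc: int) -> str:
    if sct == 0x0:  # generic command status set
        return GENERIC_STATUS.get(sc, f"unknown generic status {sc:#04x}")
    return f"non-generic status set {sct:#04x}, code {sc:#04x}"

print(decode(0x00, 0x08))  # -> ABORTED - SQ DELETION, i.e. "(00/08)"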
00:18:52.037 [2024-12-13 06:57:56.343240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3cf0 is same with the state(5) to be set
00:18:52.037 [2024-12-13 06:57:56.343251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:52.037 [2024-12-13 06:57:56.343258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:52.037 [2024-12-13 06:57:56.343265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126184 len:8 PRP1 0x0 PRP2 0x0
00:18:52.037 [2024-12-13 06:57:56.343273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.037 [2024-12-13 06:57:56.343310] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21c3cf0 was disconnected and freed. reset controller.
00:18:52.037 [2024-12-13 06:57:56.343536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:52.037 [2024-12-13 06:57:56.343606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:18:52.037 [2024-12-13 06:57:56.343700] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.037 [2024-12-13 06:57:56.343755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.037 [2024-12-13 06:57:56.343792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.037 [2024-12-13 06:57:56.343807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420
00:18:52.037 [2024-12-13 06:57:56.343819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set
00:18:52.037 [2024-12-13 06:57:56.343837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:18:52.037 [2024-12-13 06:57:56.343851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:52.037 [2024-12-13 06:57:56.343859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:52.037 [2024-12-13 06:57:56.343868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:52.037 [2024-12-13 06:57:56.343886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
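The errno = 111 in the connect() failures above is ECONNREFUSED: the test has taken the target's 10.0.0.2:4420 listener down, so every reconnect attempt is refused until the listener is re-added a few lines below. A stand-alone sketch of that retry pattern with plain Python sockets; this is illustrative only, not SPDK's transport code, and only the address and the one-second cadence are taken from the log:

import errno
import socket
import time

def wait_for_listener(addr=("10.0.0.2", 4420), attempts=5, delay=1.0):
    # Retry a plain TCP connect until the listener accepts or we give up.
    for _ in range(attempts):
        try:
            with socket.create_connection(addr, timeout=1.0):
                return True                     # listener is back up
        except OSError as e:
            if e.errno != errno.ECONNREFUSED:   # the "errno = 111" in the log
                raise
            time.sleep(delay)                   # matches "sleep 1" in timeout.sh
    return False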
00:18:52.037 [2024-12-13 06:57:56.343895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:57:56 -- host/timeout.sh@90 -- # sleep 1
00:18:52.974 [2024-12-13 06:57:57.344056] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.974 [2024-12-13 06:57:57.344433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.974 [2024-12-13 06:57:57.344489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:52.974 [2024-12-13 06:57:57.344506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420
00:18:52.974 [2024-12-13 06:57:57.344521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set
00:18:52.974 [2024-12-13 06:57:57.344556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:18:52.974 [2024-12-13 06:57:57.344576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:52.974 [2024-12-13 06:57:57.344585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:52.974 [2024-12-13 06:57:57.344596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:52.974 [2024-12-13 06:57:57.344623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:52.974 [2024-12-13 06:57:57.344634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:57:57 -- host/timeout.sh@91 -- /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:53.233 [2024-12-13 06:57:57.610175] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
06:57:57 -- host/timeout.sh@92 -- # wait 85550
00:18:54.168 [2024-12-13 06:57:58.356763] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:00.730 Latency(us)
00:19:00.730 Device Information                                                          : runtime(s)    IOPS      MiB/s   Fail/s  TO/s     Average      min         max
00:19:00.730 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:00.730 Verification LBA range: start 0x0 length 0x4000
00:19:00.730 NVMe0n1                                                                     :      10.01  9801.80    38.29    0.00    0.00    13036.27    770.79  3019898.88
00:19:00.731 ===================================================================================================================
00:19:00.731 Total                                                                       :             9801.80    38.29    0.00    0.00    13036.27    770.79  3019898.88
00:19:00.731 0
06:58:05 -- host/timeout.sh@97 -- # rpc_pid=85660
06:58:05 -- host/timeout.sh@96 -- /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
06:58:05 -- host/timeout.sh@98 -- # sleep 1
00:19:00.990 Running I/O for 10 seconds...
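The MiB/s column in the bdevperf table above is consistent with the IOPS column and the 4096-byte IO size shown in the Job line: 9801.80 IOPS x 4096 B is about 40.1 MB/s, which is 9801.80 / 256 = 38.29 MiB/s. A one-line cross-check:

# Cross-check the bdevperf table: MiB/s = IOPS * io_size / 2**20
iops, io_size = 9801.80, 4096
print(round(iops * io_size / 2**20, 2))  # -> 38.29, matching the NVMe0n1 and Total rows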
00:19:01.926 06:58:06 -- host/timeout.sh@99 -- /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:02.186 [2024-12-13 06:58:06.476918-477065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66f10 is same with the state(5) to be set [identical message repeated; duplicates elided]
00:19:02.186 [2024-12-13 06:58:06.477119-478949] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ/WRITE commands on qid:1 (lba 128432-129560, len:8 each) again completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs elided]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.478959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.478968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.478979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.478987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.478998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.189 [2024-12-13 06:58:06.479676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.189 [2024-12-13 06:58:06.479708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.189 [2024-12-13 06:58:06.479717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.190 [2024-12-13 06:58:06.479752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.190 [2024-12-13 06:58:06.479772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.190 [2024-12-13 06:58:06.479792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 
[2024-12-13 06:58:06.479802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.190 [2024-12-13 06:58:06.479811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.190 [2024-12-13 06:58:06.479831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2279e40 is same with the state(5) to be set 00:19:02.190 [2024-12-13 06:58:06.479852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.190 [2024-12-13 06:58:06.479860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.190 [2024-12-13 06:58:06.479869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129176 len:8 PRP1 0x0 PRP2 0x0 00:19:02.190 [2024-12-13 06:58:06.479877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.190 [2024-12-13 06:58:06.479918] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2279e40 was disconnected and freed. reset controller. 00:19:02.190 [2024-12-13 06:58:06.480174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.190 [2024-12-13 06:58:06.480246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor 00:19:02.190 [2024-12-13 06:58:06.480368] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.190 [2024-12-13 06:58:06.480441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.190 [2024-12-13 06:58:06.480482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.190 [2024-12-13 06:58:06.480498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420 00:19:02.190 [2024-12-13 06:58:06.480509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set 00:19:02.190 [2024-12-13 06:58:06.480527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor 00:19:02.190 [2024-12-13 06:58:06.480543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.190 [2024-12-13 06:58:06.480552] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.190 [2024-12-13 06:58:06.480561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.190 [2024-12-13 06:58:06.480581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
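Every completion in the abort run above carries the same status pair (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion"; dnr:0 means the Do Not Retry bit is clear, so these I/Os are eligible for resubmission once the controller reset completes. For summarizing such a run offline, a throwaway one-liner along these lines works (assuming the console output was saved to a file, here hypothetically named build.log):

  # Tally the aborted commands by opcode (READ vs WRITE) in a saved copy of this log;
  # the build.log filename is hypothetical.
  grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log | sort | uniq -c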
00:19:02.190 [2024-12-13 06:58:06.480591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:58:06 -- host/timeout.sh@101 -- # sleep 3
00:19:03.125 [2024-12-13 06:58:07.480729] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.125 [2024-12-13 06:58:07.480862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.125 [2024-12-13 06:58:07.480903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.125 [2024-12-13 06:58:07.480918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420
00:19:03.125 [2024-12-13 06:58:07.480931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set
00:19:03.125 [2024-12-13 06:58:07.480955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:19:03.125 [2024-12-13 06:58:07.480972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:03.125 [2024-12-13 06:58:07.480981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:03.125 [2024-12-13 06:58:07.480990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:03.125 [2024-12-13 06:58:07.481014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:03.125 [2024-12-13 06:58:07.481024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:04.060 [2024-12-13 06:58:08.481146] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.060 [2024-12-13 06:58:08.481255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.060 [2024-12-13 06:58:08.481295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.060 [2024-12-13 06:58:08.481309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420
00:19:04.060 [2024-12-13 06:58:08.481321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set
00:19:04.060 [2024-12-13 06:58:08.481345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:19:04.060 [2024-12-13 06:58:08.481362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:04.060 [2024-12-13 06:58:08.481405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:04.060 [2024-12-13 06:58:08.481415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:04.060 [2024-12-13 06:58:08.481447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.060 [2024-12-13 06:58:08.481459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:04.996 [2024-12-13 06:58:09.483056] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.996 [2024-12-13 06:58:09.483174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.996 [2024-12-13 06:58:09.483216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.996 [2024-12-13 06:58:09.483231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2173c20 with addr=10.0.0.2, port=4420
00:19:04.996 [2024-12-13 06:58:09.483244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173c20 is same with the state(5) to be set
00:19:04.996 [2024-12-13 06:58:09.483439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173c20 (9): Bad file descriptor
00:19:04.996 [2024-12-13 06:58:09.483664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:04.996 [2024-12-13 06:58:09.483676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:04.996 [2024-12-13 06:58:09.483686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:04.996 [2024-12-13 06:58:09.486234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.996 [2024-12-13 06:58:09.486264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:04.996 06:58:09 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:05.254 [2024-12-13 06:58:09.759127] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:05.513 06:58:09 -- host/timeout.sh@103 -- # wait 85660
00:19:06.079 [2024-12-13 06:58:10.508425] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
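All of the connect() failures above return errno 111 (ECONNREFUSED): the test had removed the subsystem's TCP listener, so nothing was accepting connections on 10.0.0.2:4420 while the driver kept scheduling reconnects. As soon as host/timeout.sh@102 re-adds the listener (the "NVMe/TCP Target Listening" notice), the next reset attempt goes through. A minimal sketch of that fault window, reusing the rpc.py calls visible in this log (rpc.py path shortened; assumes a target that still exports nqn.2016-06.io.spdk:cnode1):

  # Open the fault window: with the listener gone, every reconnect gets ECONNREFUSED (errno 111)
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # same pause as host/timeout.sh@101; reset attempts keep failing in the meantime
  # Close the window: restore the listener so the next scheduled reconnect succeeds
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420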
00:19:11.348
00:19:11.348 Latency(us)
00:19:11.348 [2024-12-13T06:58:15.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.348 [2024-12-13T06:58:15.867Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:11.348 Verification LBA range: start 0x0 length 0x4000
00:19:11.348 NVMe0n1 : 10.01 8444.10 32.98 6013.41 0.00 8839.55 927.19 3019898.88
00:19:11.348 [2024-12-13T06:58:15.867Z] ===================================================================================================================
00:19:11.348 [2024-12-13T06:58:15.867Z] Total : 8444.10 32.98 6013.41 0.00 8839.55 0.00 3019898.88
00:19:11.348 0
00:19:11.348 06:58:15 -- host/timeout.sh@105 -- # killprocess 85526
00:19:11.348 06:58:15 -- common/autotest_common.sh@936 -- # '[' -z 85526 ']'
00:19:11.348 06:58:15 -- common/autotest_common.sh@940 -- # kill -0 85526
00:19:11.348 06:58:15 -- common/autotest_common.sh@941 -- # uname
00:19:11.348 06:58:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:11.348 06:58:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85526
00:19:11.348 killing process with pid 85526
Received shutdown signal, test time was about 10.000000 seconds
00:19:11.348
00:19:11.348 Latency(us)
00:19:11.348 [2024-12-13T06:58:15.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.348 [2024-12-13T06:58:15.867Z] ===================================================================================================================
00:19:11.348 [2024-12-13T06:58:15.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:11.348 06:58:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:11.348 06:58:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:11.348 06:58:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85526'
00:19:11.348 06:58:15 -- common/autotest_common.sh@955 -- # kill 85526
00:19:11.348 06:58:15 -- common/autotest_common.sh@960 -- # wait 85526
00:19:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:11.348 06:58:15 -- host/timeout.sh@110 -- # bdevperf_pid=85774
00:19:11.348 06:58:15 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:19:11.348 06:58:15 -- host/timeout.sh@112 -- # waitforlisten 85774 /var/tmp/bdevperf.sock
00:19:11.348 06:58:15 -- common/autotest_common.sh@829 -- # '[' -z 85774 ']'
00:19:11.348 06:58:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:11.348 06:58:15 -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:11.348 06:58:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:11.348 06:58:15 -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:11.348 06:58:15 -- common/autotest_common.sh@10 -- # set +x
00:19:11.348 [2024-12-13 06:58:15.593097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
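Two quick consistency checks on the Latency(us) table above (back-of-the-envelope arithmetic, not part of the log): 8444.10 IOPS at the 4096-byte I/O size is 8444.10 * 4096 / 2^20 ≈ 32.98 MiB/s, matching the MiB/s column, and the maximum latency of 3019898.88 us (about 3.0 s) is consistent with I/O that sat queued for roughly the length of the listener outage. The MiB/s figure can be reproduced with:

  # Recompute the MiB/s column from the IOPS column (4096-byte I/Os)
  awk 'BEGIN { printf "%.2f MiB/s\n", 8444.10 * 4096 / 1048576 }'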
00:19:11.348 [2024-12-13 06:58:15.593475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85774 ]
00:19:11.348 [2024-12-13 06:58:15.733057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:11.348 [2024-12-13 06:58:15.765266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:12.283 06:58:16 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:12.283 06:58:16 -- common/autotest_common.sh@862 -- # return 0
00:19:12.283 06:58:16 -- host/timeout.sh@116 -- # dtrace_pid=85790
00:19:12.283 06:58:16 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85774 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
06:58:16 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:12.543 06:58:16 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:12.801 NVMe0n1
00:19:12.801 06:58:17 -- host/timeout.sh@124 -- # rpc_pid=85826
06:58:17 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
06:58:17 -- host/timeout.sh@125 -- # sleep 1
00:19:12.801 Running I/O for 10 seconds...
00:19:13.735 06:58:18 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:13.996 [2024-12-13 06:58:18.364405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.996 [2024-12-13 06:58:18.364474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.996 [2024-12-13 06:58:18.364497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.996 [2024-12-13 06:58:18.364507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.996 [2024-12-13 06:58:18.364517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.996 [2024-12-13 06:58:18.364525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.996 [2024-12-13 06:58:18.364536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.996 [2024-12-13 06:58:18.364544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.996 [2024-12-13 06:58:18.364555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.996 [2024-12-13 06:58:18.364563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:13.996 [2024-12-13 06:58:18.364774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 
06:58:18.364949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.996 [2024-12-13 06:58:18.364967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.996 [2024-12-13 06:58:18.364975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.364985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.997 [2024-12-13 06:58:18.365292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.997 [2024-12-13 06:58:18.365301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.997 [2024-12-13 06:58:18.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for every remaining queued READ, cid:80 down through cid:1, lba varying per entry, each aborted with SQ DELETION (00/08) ...]
00:19:13.999 [2024-12-13 06:58:18.367018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.999 [2024-12-13 06:58:18.367026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.999 [2024-12-13 06:58:18.367043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c1070 is same with the state(5) to be set
00:19:13.999 [2024-12-13 06:58:18.367055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:13.999 [2024-12-13 06:58:18.367063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:13.999 [2024-12-13 06:58:18.367071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111728 len:8 PRP1 0x0 PRP2 0x0
00:19:13.999 [2024-12-13 06:58:18.367080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:13.999 [2024-12-13 06:58:18.367121] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c1070 was disconnected and freed. reset controller.
00:19:13.999 [2024-12-13 06:58:18.367386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:13.999 [2024-12-13 06:58:18.367468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68eea0 (9): Bad file descriptor
00:19:13.999 [2024-12-13 06:58:18.367582] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.999 [2024-12-13 06:58:18.367656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.999 [2024-12-13 06:58:18.367697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.999 [2024-12-13 06:58:18.367713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68eea0 with addr=10.0.0.2, port=4420
00:19:13.999 [2024-12-13 06:58:18.367723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68eea0 is same with the state(5) to be set
00:19:13.999 [2024-12-13 06:58:18.367740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68eea0 (9): Bad file descriptor
00:19:13.999 [2024-12-13 06:58:18.367756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:13.999 [2024-12-13 06:58:18.367765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:13.999 [2024-12-13 06:58:18.367774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:13.999 [2024-12-13 06:58:18.367794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:13.999 [2024-12-13 06:58:18.367804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
06:58:18 -- host/timeout.sh@128 -- # wait 85826
00:19:15.900 [2024-12-13 06:58:20.367985] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:15.900 [2024-12-13 06:58:20.368102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:15.900 [2024-12-13 06:58:20.368145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:15.900 [2024-12-13 06:58:20.368161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68eea0 with addr=10.0.0.2, port=4420
00:19:15.900 [2024-12-13 06:58:20.368173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68eea0 is same with the state(5) to be set
00:19:15.900 [2024-12-13 06:58:20.368199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68eea0 (9): Bad file descriptor
00:19:15.900 [2024-12-13 06:58:20.368217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:15.900 [2024-12-13 06:58:20.368226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:15.900 [2024-12-13 06:58:20.368236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:15.900 [2024-12-13 06:58:20.368261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:15.900 [2024-12-13 06:58:20.368272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:18.431 [2024-12-13 06:58:22.368481] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.431 [2024-12-13 06:58:22.368591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.431 [2024-12-13 06:58:22.368634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.431 [2024-12-13 06:58:22.368649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68eea0 with addr=10.0.0.2, port=4420
00:19:18.431 [2024-12-13 06:58:22.368661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68eea0 is same with the state(5) to be set
00:19:18.431 [2024-12-13 06:58:22.368685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68eea0 (9): Bad file descriptor
00:19:18.431 [2024-12-13 06:58:22.368703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:18.431 [2024-12-13 06:58:22.368712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:18.431 [2024-12-13 06:58:22.368722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:18.431 [2024-12-13 06:58:22.368747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:18.431 [2024-12-13 06:58:22.368758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:20.331 [2024-12-13 06:58:24.368825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
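The three reconnect attempts above land exactly two seconds apart (06:58:20, 06:58:22, 06:58:24), i.e. the delayed-reconnect path this timeout test measures rather than an immediate-retry loop. As a hedged sketch only, with flag names from SPDK's rpc.py and values chosen for illustration rather than read from this run, a controller with that cadence could be attached like so:

    # Illustrative sketch, not the command this harness ran: retry the
    # connection every 2 s and give up on the controller after ~8 s.
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8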
00:19:20.331 [2024-12-13 06:58:24.368892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:20.331 [2024-12-13 06:58:24.368903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:20.331 [2024-12-13 06:58:24.368914] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:19:20.331 [2024-12-13 06:58:24.368941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:20.897
00:19:20.897 Latency(us)
[2024-12-13T06:58:25.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:20.897 [2024-12-13T06:58:25.416Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:19:20.897 NVMe0n1 : 8.15 2289.62 8.94 15.70 0.00 55483.26 7089.80 7015926.69
00:19:20.897 [2024-12-13T06:58:25.416Z] ===================================================================================================================
00:19:20.897 [2024-12-13T06:58:25.416Z] Total : 2289.62 8.94 15.70 0.00 55483.26 7089.80 7015926.69
00:19:20.897 0
00:19:20.897 06:58:25 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:20.897 Attaching 5 probes...
00:19:20.897 1290.699228: reset bdev controller NVMe0
00:19:20.897 1290.839709: reconnect bdev controller NVMe0
00:19:20.897 3291.142937: reconnect delay bdev controller NVMe0
00:19:20.897 3291.161566: reconnect bdev controller NVMe0
00:19:20.897 5291.631318: reconnect delay bdev controller NVMe0
00:19:20.897 5291.679310: reconnect bdev controller NVMe0
00:19:20.897 7292.100238: reconnect delay bdev controller NVMe0
00:19:20.897 7292.137217: reconnect bdev controller NVMe0
00:19:20.897 06:58:25 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:19:20.897 06:58:25 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:19:20.897 06:58:25 -- host/timeout.sh@136 -- # kill 85790
00:19:20.897 06:58:25 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:20.897 06:58:25 -- host/timeout.sh@139 -- # killprocess 85774
00:19:20.897 06:58:25 -- common/autotest_common.sh@936 -- # '[' -z 85774 ']'
00:19:20.897 06:58:25 -- common/autotest_common.sh@940 -- # kill -0 85774
00:19:20.897 06:58:25 -- common/autotest_common.sh@941 -- # uname
00:19:20.897 06:58:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:20.897 06:58:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85774
00:19:21.155 killing process with pid 85774
00:19:21.155 Received shutdown signal, test time was about 8.220720 seconds
00:19:21.155
00:19:21.156 Latency(us)
[2024-12-13T06:58:25.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-13T06:58:25.675Z] ===================================================================================================================
[2024-12-13T06:58:25.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:21.156 06:58:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:21.156 06:58:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:21.156 06:58:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85774'
00:19:21.156 06:58:25 -- common/autotest_common.sh@955 -- # kill 85774
00:19:21.156 06:58:25 -- common/autotest_common.sh@960 -- # wait 85774
00:19:21.156 06:58:25 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:21.414 06:58:25 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:19:21.414 06:58:25 -- host/timeout.sh@145 -- # nvmftestfini
00:19:21.414 06:58:25 -- nvmf/common.sh@476 -- # nvmfcleanup
00:19:21.414 06:58:25 -- nvmf/common.sh@116 -- # sync
00:19:21.414 06:58:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:19:21.414 06:58:25 -- nvmf/common.sh@119 -- # set +e
00:19:21.414 06:58:25 -- nvmf/common.sh@120 -- # for i in {1..20}
00:19:21.414 06:58:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:19:21.414 rmmod nvme_tcp
00:19:21.414 rmmod nvme_fabrics
00:19:21.414 rmmod nvme_keyring
00:19:21.673 06:58:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:19:21.673 06:58:25 -- nvmf/common.sh@123 -- # set -e
00:19:21.673 06:58:25 -- nvmf/common.sh@124 -- # return 0
00:19:21.673 06:58:25 -- nvmf/common.sh@477 -- # '[' -n 85350 ']'
00:19:21.673 06:58:25 -- nvmf/common.sh@478 -- # killprocess 85350
00:19:21.673 06:58:25 -- common/autotest_common.sh@936 -- # '[' -z 85350 ']'
00:19:21.673 06:58:25 -- common/autotest_common.sh@940 -- # kill -0 85350
00:19:21.673 06:58:25 -- common/autotest_common.sh@941 -- # uname
00:19:21.673 06:58:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:21.673 06:58:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85350
00:19:21.673 killing process with pid 85350
00:19:21.673 06:58:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:21.673 06:58:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:21.673 06:58:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85350'
00:19:21.673 06:58:25 -- common/autotest_common.sh@955 -- # kill 85350
00:19:21.673 06:58:25 -- common/autotest_common.sh@960 -- # wait 85350
00:19:21.673 06:58:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:19:21.673 06:58:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:19:21.673 06:58:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:19:21.673 06:58:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:21.673 06:58:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:19:21.673 06:58:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:21.673 06:58:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:21.673 06:58:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:21.673 06:58:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:19:21.673
00:19:21.673 real 0m45.474s
00:19:21.673 user 2m14.388s
00:19:21.673 sys 0m5.148s
00:19:21.673 06:58:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:21.673 06:58:26 -- common/autotest_common.sh@10 -- # set +x
00:19:21.673 ************************************
00:19:21.673 END TEST nvmf_timeout
00:19:21.673 ************************************
00:19:21.932 06:58:26 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]]
00:19:21.932 06:58:26 -- nvmf/nvmf.sh@127 -- # timing_exit host
00:19:21.932 06:58:26 -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:21.932 06:58:26 -- common/autotest_common.sh@10 -- # set +x
00:19:21.932 06:58:26 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:19:21.932 ************************************
00:19:21.932 END TEST nvmf_tcp
00:19:21.932 ************************************
00:19:21.932
00:19:21.932 real 10m19.083s
00:19:21.932 user 29m2.290s
00:19:21.932 sys 3m21.174s
06:58:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:21.932 06:58:26 -- common/autotest_common.sh@10 -- # set +x 00:19:21.932 06:58:26 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:19:21.932 06:58:26 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:21.932 06:58:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:21.932 06:58:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.932 06:58:26 -- common/autotest_common.sh@10 -- # set +x 00:19:21.932 ************************************ 00:19:21.932 START TEST nvmf_dif 00:19:21.932 ************************************ 00:19:21.932 06:58:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:21.932 * Looking for test storage... 00:19:21.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:21.932 06:58:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:21.932 06:58:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:21.932 06:58:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:22.190 06:58:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:22.190 06:58:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:22.190 06:58:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:22.190 06:58:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:22.190 06:58:26 -- scripts/common.sh@335 -- # IFS=.-: 00:19:22.190 06:58:26 -- scripts/common.sh@335 -- # read -ra ver1 00:19:22.190 06:58:26 -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.190 06:58:26 -- scripts/common.sh@336 -- # read -ra ver2 00:19:22.190 06:58:26 -- scripts/common.sh@337 -- # local 'op=<' 00:19:22.191 06:58:26 -- scripts/common.sh@339 -- # ver1_l=2 00:19:22.191 06:58:26 -- scripts/common.sh@340 -- # ver2_l=1 00:19:22.191 06:58:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:22.191 06:58:26 -- scripts/common.sh@343 -- # case "$op" in 00:19:22.191 06:58:26 -- scripts/common.sh@344 -- # : 1 00:19:22.191 06:58:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:22.191 06:58:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.191 06:58:26 -- scripts/common.sh@364 -- # decimal 1 00:19:22.191 06:58:26 -- scripts/common.sh@352 -- # local d=1 00:19:22.191 06:58:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.191 06:58:26 -- scripts/common.sh@354 -- # echo 1 00:19:22.191 06:58:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:22.191 06:58:26 -- scripts/common.sh@365 -- # decimal 2 00:19:22.191 06:58:26 -- scripts/common.sh@352 -- # local d=2 00:19:22.191 06:58:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.191 06:58:26 -- scripts/common.sh@354 -- # echo 2 00:19:22.191 06:58:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:22.191 06:58:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:22.191 06:58:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:22.191 06:58:26 -- scripts/common.sh@367 -- # return 0 00:19:22.191 06:58:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.191 06:58:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:22.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.191 --rc genhtml_branch_coverage=1 00:19:22.191 --rc genhtml_function_coverage=1 00:19:22.191 --rc genhtml_legend=1 00:19:22.191 --rc geninfo_all_blocks=1 00:19:22.191 --rc geninfo_unexecuted_blocks=1 00:19:22.191 00:19:22.191 ' 00:19:22.191 06:58:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:22.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.191 --rc genhtml_branch_coverage=1 00:19:22.191 --rc genhtml_function_coverage=1 00:19:22.191 --rc genhtml_legend=1 00:19:22.191 --rc geninfo_all_blocks=1 00:19:22.191 --rc geninfo_unexecuted_blocks=1 00:19:22.191 00:19:22.191 ' 00:19:22.191 06:58:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:22.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.191 --rc genhtml_branch_coverage=1 00:19:22.191 --rc genhtml_function_coverage=1 00:19:22.191 --rc genhtml_legend=1 00:19:22.191 --rc geninfo_all_blocks=1 00:19:22.191 --rc geninfo_unexecuted_blocks=1 00:19:22.191 00:19:22.191 ' 00:19:22.191 06:58:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:22.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.191 --rc genhtml_branch_coverage=1 00:19:22.191 --rc genhtml_function_coverage=1 00:19:22.191 --rc genhtml_legend=1 00:19:22.191 --rc geninfo_all_blocks=1 00:19:22.191 --rc geninfo_unexecuted_blocks=1 00:19:22.191 00:19:22.191 ' 00:19:22.191 06:58:26 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.191 06:58:26 -- nvmf/common.sh@7 -- # uname -s 00:19:22.191 06:58:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.191 06:58:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.191 06:58:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.191 06:58:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.191 06:58:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.191 06:58:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.191 06:58:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.191 06:58:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.191 06:58:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.191 06:58:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:19:22.191 
06:58:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:19:22.191 06:58:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.191 06:58:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.191 06:58:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.191 06:58:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.191 06:58:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.191 06:58:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.191 06:58:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.191 06:58:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.191 06:58:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.191 06:58:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.191 06:58:26 -- paths/export.sh@5 -- # export PATH 00:19:22.191 06:58:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.191 06:58:26 -- nvmf/common.sh@46 -- # : 0 00:19:22.191 06:58:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:22.191 06:58:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:22.191 06:58:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:22.191 06:58:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.191 06:58:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.191 06:58:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:22.191 06:58:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:22.191 06:58:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:22.191 06:58:26 -- target/dif.sh@15 -- # NULL_META=16 00:19:22.191 06:58:26 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:22.191 06:58:26 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:22.191 06:58:26 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:22.191 06:58:26 -- target/dif.sh@135 -- # nvmftestinit 00:19:22.191 06:58:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:22.191 06:58:26 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.191 06:58:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:22.191 06:58:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:22.191 06:58:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:22.191 06:58:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.191 06:58:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:22.191 06:58:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.191 06:58:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:22.191 06:58:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:22.191 06:58:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.191 06:58:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.191 06:58:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:22.191 06:58:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:22.191 06:58:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.191 06:58:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.191 06:58:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.191 06:58:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.191 06:58:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.191 06:58:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.191 06:58:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.191 06:58:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.191 06:58:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:22.191 06:58:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:22.191 Cannot find device "nvmf_tgt_br" 00:19:22.191 06:58:26 -- nvmf/common.sh@154 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.191 Cannot find device "nvmf_tgt_br2" 00:19:22.191 06:58:26 -- nvmf/common.sh@155 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:22.191 06:58:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:22.191 Cannot find device "nvmf_tgt_br" 00:19:22.191 06:58:26 -- nvmf/common.sh@157 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:22.191 Cannot find device "nvmf_tgt_br2" 00:19:22.191 06:58:26 -- nvmf/common.sh@158 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:22.191 06:58:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:22.191 06:58:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.191 06:58:26 -- nvmf/common.sh@161 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.191 06:58:26 -- nvmf/common.sh@162 -- # true 00:19:22.191 06:58:26 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:19:22.191 06:58:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.191 06:58:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.191 06:58:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.191 06:58:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.450 06:58:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.450 06:58:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.450 06:58:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:22.450 06:58:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:22.450 06:58:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:22.450 06:58:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:22.450 06:58:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:22.450 06:58:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:22.450 06:58:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.450 06:58:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.450 06:58:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.450 06:58:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:22.450 06:58:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:22.450 06:58:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.450 06:58:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.450 06:58:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.450 06:58:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.450 06:58:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.450 06:58:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:22.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:22.450 00:19:22.450 --- 10.0.0.2 ping statistics --- 00:19:22.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.450 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:22.450 06:58:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:22.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:22.450 00:19:22.450 --- 10.0.0.3 ping statistics --- 00:19:22.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.450 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:22.450 06:58:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:22.450 00:19:22.450 --- 10.0.0.1 ping statistics --- 00:19:22.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.450 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:22.450 06:58:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.451 06:58:26 -- nvmf/common.sh@421 -- # return 0 00:19:22.451 06:58:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:22.451 06:58:26 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:22.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:22.709 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:22.709 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:22.968 06:58:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.969 06:58:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:22.969 06:58:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:22.969 06:58:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.969 06:58:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:22.969 06:58:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:22.969 06:58:27 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:22.969 06:58:27 -- target/dif.sh@137 -- # nvmfappstart 00:19:22.969 06:58:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:22.969 06:58:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.969 06:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:22.969 06:58:27 -- nvmf/common.sh@469 -- # nvmfpid=86272 00:19:22.969 06:58:27 -- nvmf/common.sh@470 -- # waitforlisten 86272 00:19:22.969 06:58:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:22.969 06:58:27 -- common/autotest_common.sh@829 -- # '[' -z 86272 ']' 00:19:22.969 06:58:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.969 06:58:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.969 06:58:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.969 06:58:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.969 06:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:22.969 [2024-12-13 06:58:27.320399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:22.969 [2024-12-13 06:58:27.320512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.969 [2024-12-13 06:58:27.463612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.227 [2024-12-13 06:58:27.504096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.227 [2024-12-13 06:58:27.504259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.227 [2024-12-13 06:58:27.504280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:23.227 [2024-12-13 06:58:27.504291] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.227 [2024-12-13 06:58:27.504334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.189 06:58:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.189 06:58:28 -- common/autotest_common.sh@862 -- # return 0 00:19:24.189 06:58:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.189 06:58:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 06:58:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.189 06:58:28 -- target/dif.sh@139 -- # create_transport 00:19:24.189 06:58:28 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:24.189 06:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 [2024-12-13 06:58:28.407026] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.189 06:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.189 06:58:28 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:24.189 06:58:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:24.189 06:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 ************************************ 00:19:24.189 START TEST fio_dif_1_default 00:19:24.189 ************************************ 00:19:24.189 06:58:28 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:19:24.189 06:58:28 -- target/dif.sh@86 -- # create_subsystems 0 00:19:24.189 06:58:28 -- target/dif.sh@28 -- # local sub 00:19:24.189 06:58:28 -- target/dif.sh@30 -- # for sub in "$@" 00:19:24.189 06:58:28 -- target/dif.sh@31 -- # create_subsystem 0 00:19:24.189 06:58:28 -- target/dif.sh@18 -- # local sub_id=0 00:19:24.189 06:58:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:24.189 06:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 bdev_null0 00:19:24.189 06:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.189 06:58:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:24.189 06:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 06:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.189 06:58:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:24.189 06:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 06:58:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.189 06:58:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.189 06:58:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.189 06:58:28 -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 [2024-12-13 06:58:28.451123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.189 06:58:28 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.190 06:58:28 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:24.190 06:58:28 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:24.190 06:58:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:24.190 06:58:28 -- nvmf/common.sh@520 -- # config=() 00:19:24.190 06:58:28 -- nvmf/common.sh@520 -- # local subsystem config 00:19:24.190 06:58:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.190 06:58:28 -- target/dif.sh@82 -- # gen_fio_conf 00:19:24.190 06:58:28 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.190 06:58:28 -- target/dif.sh@54 -- # local file 00:19:24.190 06:58:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:24.190 06:58:28 -- target/dif.sh@56 -- # cat 00:19:24.190 06:58:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.190 06:58:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:24.190 06:58:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.190 06:58:28 -- common/autotest_common.sh@1330 -- # shift 00:19:24.190 06:58:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:24.190 06:58:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.190 06:58:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:24.190 06:58:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:24.190 { 00:19:24.190 "params": { 00:19:24.190 "name": "Nvme$subsystem", 00:19:24.190 "trtype": "$TEST_TRANSPORT", 00:19:24.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.190 "adrfam": "ipv4", 00:19:24.190 "trsvcid": "$NVMF_PORT", 00:19:24.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.190 "hdgst": ${hdgst:-false}, 00:19:24.190 "ddgst": ${ddgst:-false} 00:19:24.190 }, 00:19:24.190 "method": "bdev_nvme_attach_controller" 00:19:24.190 } 00:19:24.190 EOF 00:19:24.190 )") 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.190 06:58:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:24.190 06:58:28 -- nvmf/common.sh@542 -- # cat 00:19:24.190 06:58:28 -- target/dif.sh@72 -- # (( file <= files )) 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:24.190 06:58:28 -- nvmf/common.sh@544 -- # jq . 
00:19:24.190 06:58:28 -- nvmf/common.sh@545 -- # IFS=, 00:19:24.190 06:58:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:24.190 "params": { 00:19:24.190 "name": "Nvme0", 00:19:24.190 "trtype": "tcp", 00:19:24.190 "traddr": "10.0.0.2", 00:19:24.190 "adrfam": "ipv4", 00:19:24.190 "trsvcid": "4420", 00:19:24.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:24.190 "hdgst": false, 00:19:24.190 "ddgst": false 00:19:24.190 }, 00:19:24.190 "method": "bdev_nvme_attach_controller" 00:19:24.190 }' 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.190 06:58:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.190 06:58:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:24.190 06:58:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.190 06:58:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.190 06:58:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.190 06:58:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:24.190 fio-3.35 00:19:24.190 Starting 1 thread 00:19:24.760 [2024-12-13 06:58:29.003455] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
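
[Editor's note] The JSON fragment printed by printf above is what gen_nvmf_target_json hands to fio_bdev over /dev/fd/62. To reproduce the run outside the harness, a plausible standalone equivalent is to wrap that fragment in the standard SPDK "subsystems" config that --spdk_json_conf expects and invoke fio with the preloaded plugin, as the LD_PRELOAD line in the trace does (file paths and the Nvme0n1 bdev name are illustrative assumptions):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# A job matching the fio_dif_1_default output below: 4 KiB randread, iodepth 4, 10 s.
cat > /tmp/job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF
# Preload the SPDK bdev fio plugin so ioengine=spdk_bdev resolves.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/job.fio
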
00:19:24.760 [2024-12-13 06:58:29.003542] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:34.738 00:19:34.738 filename0: (groupid=0, jobs=1): err= 0: pid=86339: Fri Dec 13 06:58:39 2024 00:19:34.738 read: IOPS=9273, BW=36.2MiB/s (38.0MB/s)(362MiB/10001msec) 00:19:34.738 slat (usec): min=5, max=240, avg= 8.12, stdev= 3.56 00:19:34.738 clat (usec): min=309, max=4453, avg=407.54, stdev=53.54 00:19:34.738 lat (usec): min=315, max=4478, avg=415.66, stdev=54.26 00:19:34.738 clat percentiles (usec): 00:19:34.738 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:19:34.738 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 412], 00:19:34.738 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 469], 95.00th=[ 486], 00:19:34.738 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 594], 00:19:34.738 | 99.99th=[ 889] 00:19:34.738 bw ( KiB/s): min=35776, max=38464, per=100.00%, avg=37153.68, stdev=833.13, samples=19 00:19:34.738 iops : min= 8944, max= 9616, avg=9288.42, stdev=208.28, samples=19 00:19:34.738 lat (usec) : 500=97.05%, 750=2.93%, 1000=0.01% 00:19:34.738 lat (msec) : 4=0.01%, 10=0.01% 00:19:34.738 cpu : usr=85.25%, sys=12.85%, ctx=22, majf=0, minf=0 00:19:34.738 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.738 issued rwts: total=92748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.738 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:34.738 00:19:34.738 Run status group 0 (all jobs): 00:19:34.738 READ: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=362MiB (380MB), run=10001-10001msec 00:19:34.997 06:58:39 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:34.997 06:58:39 -- target/dif.sh@43 -- # local sub 00:19:34.997 06:58:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:34.997 06:58:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:34.997 06:58:39 -- target/dif.sh@36 -- # local sub_id=0 00:19:34.997 06:58:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.997 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.997 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.997 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.997 06:58:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:34.997 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.997 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.997 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.997 00:19:34.997 real 0m10.864s 00:19:34.997 user 0m9.081s 00:19:34.997 sys 0m1.516s 00:19:34.997 06:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:34.997 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.997 ************************************ 00:19:34.997 END TEST fio_dif_1_default 00:19:34.997 ************************************ 00:19:34.997 06:58:39 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:34.997 06:58:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:34.997 06:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.997 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.997 ************************************ 00:19:34.997 START TEST 
fio_dif_1_multi_subsystems 00:19:34.997 ************************************ 00:19:34.997 06:58:39 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:19:34.997 06:58:39 -- target/dif.sh@92 -- # local files=1 00:19:34.997 06:58:39 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:34.997 06:58:39 -- target/dif.sh@28 -- # local sub 00:19:34.997 06:58:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.997 06:58:39 -- target/dif.sh@31 -- # create_subsystem 0 00:19:34.997 06:58:39 -- target/dif.sh@18 -- # local sub_id=0 00:19:34.997 06:58:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 bdev_null0 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 [2024-12-13 06:58:39.370036] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.998 06:58:39 -- target/dif.sh@31 -- # create_subsystem 1 00:19:34.998 06:58:39 -- target/dif.sh@18 -- # local sub_id=1 00:19:34.998 06:58:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 bdev_null1 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.998 06:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.998 06:58:39 -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.998 06:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.998 06:58:39 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:34.998 06:58:39 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:34.998 06:58:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:34.998 06:58:39 -- nvmf/common.sh@520 -- # config=() 00:19:34.998 06:58:39 -- nvmf/common.sh@520 -- # local subsystem config 00:19:34.998 06:58:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.998 06:58:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:34.998 06:58:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.998 06:58:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:34.998 { 00:19:34.998 "params": { 00:19:34.998 "name": "Nvme$subsystem", 00:19:34.998 "trtype": "$TEST_TRANSPORT", 00:19:34.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.998 "adrfam": "ipv4", 00:19:34.998 "trsvcid": "$NVMF_PORT", 00:19:34.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.998 "hdgst": ${hdgst:-false}, 00:19:34.998 "ddgst": ${ddgst:-false} 00:19:34.998 }, 00:19:34.998 "method": "bdev_nvme_attach_controller" 00:19:34.998 } 00:19:34.998 EOF 00:19:34.998 )") 00:19:34.998 06:58:39 -- target/dif.sh@82 -- # gen_fio_conf 00:19:34.998 06:58:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:34.998 06:58:39 -- target/dif.sh@54 -- # local file 00:19:34.998 06:58:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:34.998 06:58:39 -- target/dif.sh@56 -- # cat 00:19:34.998 06:58:39 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:34.998 06:58:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.998 06:58:39 -- common/autotest_common.sh@1330 -- # shift 00:19:34.998 06:58:39 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:34.998 06:58:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.998 06:58:39 -- nvmf/common.sh@542 -- # cat 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:34.998 06:58:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:34.998 06:58:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.998 06:58:39 -- target/dif.sh@73 -- # cat 00:19:34.998 06:58:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:34.998 06:58:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:34.998 { 00:19:34.998 "params": { 00:19:34.998 "name": "Nvme$subsystem", 00:19:34.998 "trtype": "$TEST_TRANSPORT", 00:19:34.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.998 "adrfam": "ipv4", 00:19:34.998 "trsvcid": "$NVMF_PORT", 00:19:34.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.998 "hdgst": ${hdgst:-false}, 00:19:34.998 "ddgst": ${ddgst:-false} 00:19:34.998 }, 00:19:34.998 "method": "bdev_nvme_attach_controller" 00:19:34.998 } 00:19:34.998 EOF 00:19:34.998 )") 00:19:34.998 06:58:39 -- target/dif.sh@72 -- # (( file++ )) 00:19:34.998 06:58:39 -- 
target/dif.sh@72 -- # (( file <= files )) 00:19:34.998 06:58:39 -- nvmf/common.sh@542 -- # cat 00:19:34.998 06:58:39 -- nvmf/common.sh@544 -- # jq . 00:19:34.998 06:58:39 -- nvmf/common.sh@545 -- # IFS=, 00:19:34.998 06:58:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:34.998 "params": { 00:19:34.998 "name": "Nvme0", 00:19:34.998 "trtype": "tcp", 00:19:34.998 "traddr": "10.0.0.2", 00:19:34.998 "adrfam": "ipv4", 00:19:34.998 "trsvcid": "4420", 00:19:34.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:34.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:34.998 "hdgst": false, 00:19:34.998 "ddgst": false 00:19:34.998 }, 00:19:34.998 "method": "bdev_nvme_attach_controller" 00:19:34.998 },{ 00:19:34.998 "params": { 00:19:34.998 "name": "Nvme1", 00:19:34.998 "trtype": "tcp", 00:19:34.998 "traddr": "10.0.0.2", 00:19:34.998 "adrfam": "ipv4", 00:19:34.998 "trsvcid": "4420", 00:19:34.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.998 "hdgst": false, 00:19:34.998 "ddgst": false 00:19:34.998 }, 00:19:34.998 "method": "bdev_nvme_attach_controller" 00:19:34.998 }' 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:34.998 06:58:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:34.998 06:58:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:34.998 06:58:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:34.998 06:58:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:34.998 06:58:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:34.998 06:58:39 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:35.258 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:35.258 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:35.258 fio-3.35 00:19:35.258 Starting 2 threads 00:19:35.516 [2024-12-13 06:58:39.998181] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
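
[Editor's note] The create_subsystems calls traced above reduce to four RPCs per subsystem id: one DIF type-1 null bdev (64 MiB, 512-byte blocks, 16-byte metadata), a subsystem, a namespace, and a TCP listener. A sketch using scripts/rpc.py directly in place of the harness's rpc_cmd wrapper, for the two ids this test uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1; do
  # Null bdev with 16-byte metadata and DIF type 1, as bdev_null_create is traced above.
  $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
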
00:19:35.516 [2024-12-13 06:58:39.998275] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:47.719 00:19:47.719 filename0: (groupid=0, jobs=1): err= 0: pid=86504: Fri Dec 13 06:58:50 2024 00:19:47.719 read: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:19:47.719 slat (usec): min=6, max=108, avg=13.24, stdev= 5.21 00:19:47.719 clat (usec): min=387, max=3648, avg=744.95, stdev=74.14 00:19:47.719 lat (usec): min=393, max=3670, avg=758.18, stdev=75.17 00:19:47.719 clat percentiles (usec): 00:19:47.719 | 1.00th=[ 619], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 685], 00:19:47.719 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 758], 00:19:47.719 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:19:47.719 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 1020], 00:19:47.719 | 99.99th=[ 1156] 00:19:47.719 bw ( KiB/s): min=19776, max=21472, per=50.08%, avg=20516.84, stdev=559.37, samples=19 00:19:47.719 iops : min= 4944, max= 5368, avg=5129.16, stdev=139.76, samples=19 00:19:47.719 lat (usec) : 500=0.02%, 750=57.85%, 1000=42.05% 00:19:47.719 lat (msec) : 2=0.06%, 4=0.01% 00:19:47.719 cpu : usr=89.34%, sys=9.11%, ctx=50, majf=0, minf=0 00:19:47.719 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.719 issued rwts: total=51224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.719 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:47.719 filename1: (groupid=0, jobs=1): err= 0: pid=86505: Fri Dec 13 06:58:50 2024 00:19:47.719 read: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:19:47.719 slat (usec): min=4, max=304, avg=13.49, stdev= 5.58 00:19:47.719 clat (usec): min=461, max=3717, avg=743.35, stdev=71.56 00:19:47.719 lat (usec): min=484, max=3739, avg=756.83, stdev=72.38 00:19:47.719 clat percentiles (usec): 00:19:47.719 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:19:47.719 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:19:47.719 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 865], 00:19:47.719 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 1004], 99.95th=[ 1090], 00:19:47.719 | 99.99th=[ 1352] 00:19:47.719 bw ( KiB/s): min=19776, max=21440, per=50.07%, avg=20513.47, stdev=559.63, samples=19 00:19:47.719 iops : min= 4944, max= 5360, avg=5128.32, stdev=139.82, samples=19 00:19:47.719 lat (usec) : 500=0.01%, 750=59.32%, 1000=40.56% 00:19:47.719 lat (msec) : 2=0.10%, 4=0.01% 00:19:47.719 cpu : usr=90.09%, sys=8.36%, ctx=46, majf=0, minf=0 00:19:47.719 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.719 issued rwts: total=51212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.719 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:47.719 00:19:47.719 Run status group 0 (all jobs): 00:19:47.719 READ: bw=40.0MiB/s (42.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400MiB (420MB), run=10001-10001msec 00:19:47.719 06:58:50 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:47.719 06:58:50 -- target/dif.sh@43 -- # local sub 00:19:47.719 06:58:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:47.719 06:58:50 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:19:47.719 06:58:50 -- target/dif.sh@36 -- # local sub_id=0 00:19:47.719 06:58:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:47.719 06:58:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:47.719 06:58:50 -- target/dif.sh@36 -- # local sub_id=1 00:19:47.719 06:58:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 00:19:47.719 real 0m10.947s 00:19:47.719 user 0m18.558s 00:19:47.719 sys 0m1.964s 00:19:47.719 06:58:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:47.719 ************************************ 00:19:47.719 END TEST fio_dif_1_multi_subsystems 00:19:47.719 ************************************ 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:47.719 06:58:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:47.719 06:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 ************************************ 00:19:47.719 START TEST fio_dif_rand_params 00:19:47.719 ************************************ 00:19:47.719 06:58:50 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:47.719 06:58:50 -- target/dif.sh@100 -- # local NULL_DIF 00:19:47.719 06:58:50 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:47.719 06:58:50 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:47.719 06:58:50 -- target/dif.sh@103 -- # bs=128k 00:19:47.719 06:58:50 -- target/dif.sh@103 -- # numjobs=3 00:19:47.719 06:58:50 -- target/dif.sh@103 -- # iodepth=3 00:19:47.719 06:58:50 -- target/dif.sh@103 -- # runtime=5 00:19:47.719 06:58:50 -- target/dif.sh@105 -- # create_subsystems 0 00:19:47.719 06:58:50 -- target/dif.sh@28 -- # local sub 00:19:47.719 06:58:50 -- target/dif.sh@30 -- # for sub in "$@" 00:19:47.719 06:58:50 -- target/dif.sh@31 -- # create_subsystem 0 00:19:47.719 06:58:50 -- target/dif.sh@18 -- # local sub_id=0 00:19:47.719 06:58:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 bdev_null0 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:47.719 06:58:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.719 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 [2024-12-13 06:58:50.378886] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.719 06:58:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.719 06:58:50 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:47.719 06:58:50 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:47.719 06:58:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:47.719 06:58:50 -- nvmf/common.sh@520 -- # config=() 00:19:47.719 06:58:50 -- nvmf/common.sh@520 -- # local subsystem config 00:19:47.719 06:58:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.720 06:58:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:47.720 06:58:50 -- target/dif.sh@82 -- # gen_fio_conf 00:19:47.720 06:58:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:47.720 { 00:19:47.720 "params": { 00:19:47.720 "name": "Nvme$subsystem", 00:19:47.720 "trtype": "$TEST_TRANSPORT", 00:19:47.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.720 "adrfam": "ipv4", 00:19:47.720 "trsvcid": "$NVMF_PORT", 00:19:47.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.720 "hdgst": ${hdgst:-false}, 00:19:47.720 "ddgst": ${ddgst:-false} 00:19:47.720 }, 00:19:47.720 "method": "bdev_nvme_attach_controller" 00:19:47.720 } 00:19:47.720 EOF 00:19:47.720 )") 00:19:47.720 06:58:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.720 06:58:50 -- target/dif.sh@54 -- # local file 00:19:47.720 06:58:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:47.720 06:58:50 -- target/dif.sh@56 -- # cat 00:19:47.720 06:58:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:47.720 06:58:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:47.720 06:58:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.720 06:58:50 -- common/autotest_common.sh@1330 -- # shift 00:19:47.720 06:58:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:47.720 06:58:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.720 06:58:50 -- nvmf/common.sh@542 -- # cat 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # 
grep libasan 00:19:47.720 06:58:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:47.720 06:58:50 -- target/dif.sh@72 -- # (( file <= files )) 00:19:47.720 06:58:50 -- nvmf/common.sh@544 -- # jq . 00:19:47.720 06:58:50 -- nvmf/common.sh@545 -- # IFS=, 00:19:47.720 06:58:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:47.720 "params": { 00:19:47.720 "name": "Nvme0", 00:19:47.720 "trtype": "tcp", 00:19:47.720 "traddr": "10.0.0.2", 00:19:47.720 "adrfam": "ipv4", 00:19:47.720 "trsvcid": "4420", 00:19:47.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:47.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:47.720 "hdgst": false, 00:19:47.720 "ddgst": false 00:19:47.720 }, 00:19:47.720 "method": "bdev_nvme_attach_controller" 00:19:47.720 }' 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:47.720 06:58:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:47.720 06:58:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:47.720 06:58:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:47.720 06:58:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:47.720 06:58:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:47.720 06:58:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:47.720 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:47.720 ... 00:19:47.720 fio-3.35 00:19:47.720 Starting 3 threads 00:19:47.720 [2024-12-13 06:58:50.898851] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
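
[Editor's note] The three 128 KiB randread jobs starting below come from fio_dif_rand_params with NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5, as set in the trace above. A hedged approximation of the job file gen_fio_conf assembles (the harness's exact template is not visible in this log, and filename=Nvme0n1 assumes SPDK's usual controller-plus-namespace bdev naming), reusing the /tmp/bdev.json from the earlier sketch:

cat > /tmp/rand.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/rand.fio
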
00:19:47.720 [2024-12-13 06:58:50.898936] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:51.906 00:19:51.906 filename0: (groupid=0, jobs=1): err= 0: pid=86655: Fri Dec 13 06:58:56 2024 00:19:51.906 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5004msec) 00:19:51.906 slat (nsec): min=6895, max=56139, avg=15200.04, stdev=4885.95 00:19:51.906 clat (usec): min=4018, max=12915, avg=10914.47, stdev=594.52 00:19:51.906 lat (usec): min=4025, max=12940, avg=10929.67, stdev=594.96 00:19:51.906 clat percentiles (usec): 00:19:51.906 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:19:51.906 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:51.906 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:19:51.906 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12911], 99.95th=[12911], 00:19:51.906 | 99.99th=[12911] 00:19:51.906 bw ( KiB/s): min=33792, max=36096, per=33.30%, avg=34986.67, stdev=677.31, samples=9 00:19:51.906 iops : min= 264, max= 282, avg=273.33, stdev= 5.29, samples=9 00:19:51.906 lat (msec) : 10=0.44%, 20=99.56% 00:19:51.906 cpu : usr=91.47%, sys=7.94%, ctx=12, majf=0, minf=9 00:19:51.906 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.906 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.906 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:51.906 filename0: (groupid=0, jobs=1): err= 0: pid=86656: Fri Dec 13 06:58:56 2024 00:19:51.906 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5001msec) 00:19:51.906 slat (nsec): min=7139, max=55199, avg=15476.62, stdev=4826.49 00:19:51.906 clat (usec): min=10195, max=12058, avg=10931.95, stdev=477.29 00:19:51.906 lat (usec): min=10208, max=12075, avg=10947.43, stdev=477.57 00:19:51.906 clat percentiles (usec): 00:19:51.906 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:19:51.906 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:51.906 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:19:51.906 | 99.00th=[11994], 99.50th=[11994], 99.90th=[11994], 99.95th=[12125], 00:19:51.906 | 99.99th=[12125] 00:19:51.906 bw ( KiB/s): min=34560, max=36096, per=33.31%, avg=34994.33, stdev=551.78, samples=9 00:19:51.906 iops : min= 270, max= 282, avg=273.33, stdev= 4.36, samples=9 00:19:51.906 lat (msec) : 20=100.00% 00:19:51.906 cpu : usr=90.76%, sys=8.56%, ctx=9, majf=0, minf=9 00:19:51.906 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.906 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.906 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:51.906 filename0: (groupid=0, jobs=1): err= 0: pid=86657: Fri Dec 13 06:58:56 2024 00:19:51.906 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5002msec) 00:19:51.906 slat (nsec): min=6803, max=57216, avg=14243.32, stdev=5426.98 00:19:51.906 clat (usec): min=10187, max=12539, avg=10937.97, stdev=480.80 00:19:51.906 lat (usec): min=10200, max=12571, avg=10952.21, stdev=481.41 00:19:51.906 clat percentiles (usec): 00:19:51.906 | 1.00th=[10290], 5.00th=[10421], 
10.00th=[10421], 20.00th=[10552], 00:19:51.906 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:51.906 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:19:51.906 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12518], 99.95th=[12518], 00:19:51.907 | 99.99th=[12518] 00:19:51.907 bw ( KiB/s): min=34560, max=36096, per=33.30%, avg=34986.67, stdev=557.94, samples=9 00:19:51.907 iops : min= 270, max= 282, avg=273.33, stdev= 4.36, samples=9 00:19:51.907 lat (msec) : 20=100.00% 00:19:51.907 cpu : usr=90.88%, sys=8.30%, ctx=58, majf=0, minf=0 00:19:51.907 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.907 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.907 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:51.907 00:19:51.907 Run status group 0 (all jobs): 00:19:51.907 READ: bw=103MiB/s (108MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.9MB/s), io=513MiB (538MB), run=5001-5004msec 00:19:51.907 06:58:56 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:51.907 06:58:56 -- target/dif.sh@43 -- # local sub 00:19:51.907 06:58:56 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.907 06:58:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.907 06:58:56 -- target/dif.sh@36 -- # local sub_id=0 00:19:51.907 06:58:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # bs=4k 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # numjobs=8 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # iodepth=16 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # runtime= 00:19:51.907 06:58:56 -- target/dif.sh@109 -- # files=2 00:19:51.907 06:58:56 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:51.907 06:58:56 -- target/dif.sh@28 -- # local sub 00:19:51.907 06:58:56 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.907 06:58:56 -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.907 06:58:56 -- target/dif.sh@18 -- # local sub_id=0 00:19:51.907 06:58:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 bdev_null0 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 [2024-12-13 06:58:56.218663] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.907 06:58:56 -- target/dif.sh@31 -- # create_subsystem 1 00:19:51.907 06:58:56 -- target/dif.sh@18 -- # local sub_id=1 00:19:51.907 06:58:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 bdev_null1 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.907 06:58:56 -- target/dif.sh@31 -- # create_subsystem 2 00:19:51.907 06:58:56 -- target/dif.sh@18 -- # local sub_id=2 00:19:51.907 06:58:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 bdev_null2 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:51.907 06:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.907 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:19:51.907 06:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.907 06:58:56 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:51.907 06:58:56 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:51.907 06:58:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:51.907 06:58:56 -- nvmf/common.sh@520 -- # config=() 00:19:51.907 06:58:56 -- nvmf/common.sh@520 -- # local subsystem config 00:19:51.907 06:58:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.907 06:58:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.907 06:58:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.907 { 00:19:51.907 "params": { 00:19:51.907 "name": "Nvme$subsystem", 00:19:51.907 "trtype": "$TEST_TRANSPORT", 00:19:51.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.907 "adrfam": "ipv4", 00:19:51.907 "trsvcid": "$NVMF_PORT", 00:19:51.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.907 "hdgst": ${hdgst:-false}, 00:19:51.907 "ddgst": ${ddgst:-false} 00:19:51.907 }, 00:19:51.907 "method": "bdev_nvme_attach_controller" 00:19:51.907 } 00:19:51.907 EOF 00:19:51.907 )") 00:19:51.907 06:58:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.907 06:58:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:51.907 06:58:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.907 06:58:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:51.907 06:58:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.907 06:58:56 -- common/autotest_common.sh@1330 -- # shift 00:19:51.907 06:58:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:51.907 06:58:56 -- nvmf/common.sh@542 -- # cat 00:19:51.907 06:58:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.907 06:58:56 -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.907 06:58:56 -- target/dif.sh@54 -- # local file 00:19:51.907 06:58:56 -- target/dif.sh@56 -- # cat 00:19:51.907 06:58:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.907 06:58:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.907 06:58:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:51.907 06:58:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.907 06:58:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.907 { 00:19:51.907 "params": { 00:19:51.907 "name": "Nvme$subsystem", 00:19:51.907 "trtype": "$TEST_TRANSPORT", 00:19:51.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.907 "adrfam": "ipv4", 00:19:51.907 "trsvcid": "$NVMF_PORT", 00:19:51.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.907 "hdgst": ${hdgst:-false}, 00:19:51.907 "ddgst": ${ddgst:-false} 00:19:51.907 }, 00:19:51.907 "method": "bdev_nvme_attach_controller" 00:19:51.907 } 00:19:51.907 EOF 
00:19:51.907 )") 00:19:51.907 06:58:56 -- nvmf/common.sh@542 -- # cat 00:19:51.907 06:58:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.907 06:58:56 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.907 06:58:56 -- target/dif.sh@73 -- # cat 00:19:51.908 06:58:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.908 06:58:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.908 { 00:19:51.908 "params": { 00:19:51.908 "name": "Nvme$subsystem", 00:19:51.908 "trtype": "$TEST_TRANSPORT", 00:19:51.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.908 "adrfam": "ipv4", 00:19:51.908 "trsvcid": "$NVMF_PORT", 00:19:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.908 "hdgst": ${hdgst:-false}, 00:19:51.908 "ddgst": ${ddgst:-false} 00:19:51.908 }, 00:19:51.908 "method": "bdev_nvme_attach_controller" 00:19:51.908 } 00:19:51.908 EOF 00:19:51.908 )") 00:19:51.908 06:58:56 -- nvmf/common.sh@542 -- # cat 00:19:51.908 06:58:56 -- target/dif.sh@72 -- # (( file++ )) 00:19:51.908 06:58:56 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.908 06:58:56 -- target/dif.sh@73 -- # cat 00:19:51.908 06:58:56 -- nvmf/common.sh@544 -- # jq . 00:19:51.908 06:58:56 -- nvmf/common.sh@545 -- # IFS=, 00:19:51.908 06:58:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:51.908 "params": { 00:19:51.908 "name": "Nvme0", 00:19:51.908 "trtype": "tcp", 00:19:51.908 "traddr": "10.0.0.2", 00:19:51.908 "adrfam": "ipv4", 00:19:51.908 "trsvcid": "4420", 00:19:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.908 "hdgst": false, 00:19:51.908 "ddgst": false 00:19:51.908 }, 00:19:51.908 "method": "bdev_nvme_attach_controller" 00:19:51.908 },{ 00:19:51.908 "params": { 00:19:51.908 "name": "Nvme1", 00:19:51.908 "trtype": "tcp", 00:19:51.908 "traddr": "10.0.0.2", 00:19:51.908 "adrfam": "ipv4", 00:19:51.908 "trsvcid": "4420", 00:19:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.908 "hdgst": false, 00:19:51.908 "ddgst": false 00:19:51.908 }, 00:19:51.908 "method": "bdev_nvme_attach_controller" 00:19:51.908 },{ 00:19:51.908 "params": { 00:19:51.908 "name": "Nvme2", 00:19:51.908 "trtype": "tcp", 00:19:51.908 "traddr": "10.0.0.2", 00:19:51.908 "adrfam": "ipv4", 00:19:51.908 "trsvcid": "4420", 00:19:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.908 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:51.908 "hdgst": false, 00:19:51.908 "ddgst": false 00:19:51.908 }, 00:19:51.908 "method": "bdev_nvme_attach_controller" 00:19:51.908 }' 00:19:51.908 06:58:56 -- target/dif.sh@72 -- # (( file++ )) 00:19:51.908 06:58:56 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.908 06:58:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.908 06:58:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.908 06:58:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.908 06:58:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.908 06:58:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:51.908 06:58:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.908 06:58:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.908 06:58:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.908 06:58:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.908 06:58:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.166 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:52.166 ... 00:19:52.166 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:52.166 ... 00:19:52.166 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:52.166 ... 00:19:52.166 fio-3.35 00:19:52.166 Starting 24 threads 00:19:52.731 [2024-12-13 06:58:56.943724] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:52.731 [2024-12-13 06:58:56.943815] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:02.694 00:20:02.694 filename0: (groupid=0, jobs=1): err= 0: pid=86753: Fri Dec 13 06:59:07 2024 00:20:02.694 read: IOPS=193, BW=773KiB/s (791kB/s)(7744KiB/10019msec) 00:20:02.694 slat (usec): min=4, max=8031, avg=34.93, stdev=363.85 00:20:02.694 clat (msec): min=27, max=157, avg=82.61, stdev=24.84 00:20:02.694 lat (msec): min=27, max=157, avg=82.64, stdev=24.83 00:20:02.694 clat percentiles (msec): 00:20:02.694 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:02.694 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 94], 00:20:02.694 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 122], 00:20:02.694 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 159], 00:20:02.694 | 99.99th=[ 159] 00:20:02.694 bw ( KiB/s): min= 512, max= 1104, per=3.83%, avg=768.00, stdev=196.90, samples=20 00:20:02.694 iops : min= 128, max= 276, avg=192.00, stdev=49.23, samples=20 00:20:02.694 lat (msec) : 50=12.71%, 100=59.25%, 250=28.05% 00:20:02.694 cpu : usr=40.26%, sys=2.35%, ctx=1227, majf=0, minf=9 00:20:02.694 IO depths : 1=0.1%, 2=3.7%, 4=14.6%, 8=67.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:20:02.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 complete : 0=0.0%, 4=91.3%, 8=5.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.694 filename0: (groupid=0, jobs=1): err= 0: pid=86754: Fri Dec 13 06:59:07 2024 00:20:02.694 read: IOPS=211, BW=847KiB/s (868kB/s)(8492KiB/10022msec) 00:20:02.694 slat (usec): min=3, max=8034, avg=22.50, stdev=245.94 00:20:02.694 clat (msec): min=28, max=123, avg=75.43, stdev=19.87 00:20:02.694 lat (msec): min=28, max=123, avg=75.45, stdev=19.87 00:20:02.694 clat percentiles (msec): 00:20:02.694 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:02.694 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:02.694 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:20:02.694 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 124], 00:20:02.694 | 99.99th=[ 124] 00:20:02.694 bw ( KiB/s): min= 688, max= 1024, per=4.20%, avg=842.60, stdev=130.71, samples=20 00:20:02.694 iops : min= 172, max= 256, avg=210.65, stdev=32.68, samples=20 00:20:02.694 lat (msec) : 50=14.74%, 100=72.30%, 250=12.95% 00:20:02.694 cpu : usr=31.22%, sys=1.61%, ctx=885, majf=0, minf=9 00:20:02.694 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:02.694 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.694 filename0: (groupid=0, jobs=1): err= 0: pid=86755: Fri Dec 13 06:59:07 2024 00:20:02.694 read: IOPS=223, BW=892KiB/s (914kB/s)(8924KiB/10003msec) 00:20:02.694 slat (usec): min=4, max=8027, avg=21.85, stdev=239.85 00:20:02.694 clat (usec): min=1461, max=131121, avg=71630.51, stdev=21762.49 00:20:02.694 lat (usec): min=1467, max=131134, avg=71652.36, stdev=21761.87 00:20:02.694 clat percentiles (msec): 00:20:02.694 | 1.00th=[ 3], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 51], 00:20:02.694 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:20:02.694 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:20:02.694 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 132], 00:20:02.694 | 99.99th=[ 132] 00:20:02.694 bw ( KiB/s): min= 696, max= 1096, per=4.33%, avg=867.89, stdev=138.68, samples=19 00:20:02.694 iops : min= 174, max= 274, avg=216.95, stdev=34.70, samples=19 00:20:02.694 lat (msec) : 2=0.54%, 4=0.72%, 50=18.96%, 100=69.25%, 250=10.53% 00:20:02.694 cpu : usr=31.88%, sys=1.77%, ctx=955, majf=0, minf=9 00:20:02.694 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:02.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.694 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.694 filename0: (groupid=0, jobs=1): err= 0: pid=86756: Fri Dec 13 06:59:07 2024 00:20:02.694 read: IOPS=209, BW=837KiB/s (858kB/s)(8412KiB/10045msec) 00:20:02.694 slat (usec): min=8, max=8022, avg=18.80, stdev=186.88 00:20:02.694 clat (msec): min=22, max=132, avg=76.21, stdev=20.30 00:20:02.694 lat (msec): min=23, max=132, avg=76.23, stdev=20.30 00:20:02.694 clat percentiles (msec): 00:20:02.694 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:02.694 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:20:02.694 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.694 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 131], 99.95th=[ 132], 00:20:02.695 | 99.99th=[ 132] 00:20:02.695 bw ( KiB/s): min= 640, max= 1168, per=4.18%, avg=837.60, stdev=148.77, samples=20 00:20:02.695 iops : min= 160, max= 292, avg=209.40, stdev=37.19, samples=20 00:20:02.695 lat (msec) : 50=13.36%, 100=72.33%, 250=14.31% 00:20:02.695 cpu : usr=31.46%, sys=1.69%, ctx=933, majf=0, minf=9 00:20:02.695 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename0: (groupid=0, jobs=1): err= 0: pid=86757: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=205, BW=823KiB/s (843kB/s)(8256KiB/10027msec) 00:20:02.695 slat (usec): min=3, max=8025, avg=25.60, stdev=249.38 00:20:02.695 clat (msec): min=32, max=146, avg=77.56, stdev=20.68 00:20:02.695 lat (msec): min=32, max=146, avg=77.59, stdev=20.67 
00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 60], 00:20:02.695 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:20:02.695 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:20:02.695 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 142], 00:20:02.695 | 99.99th=[ 146] 00:20:02.695 bw ( KiB/s): min= 656, max= 1040, per=4.10%, avg=821.65, stdev=151.75, samples=20 00:20:02.695 iops : min= 164, max= 260, avg=205.40, stdev=37.95, samples=20 00:20:02.695 lat (msec) : 50=13.03%, 100=71.08%, 250=15.89% 00:20:02.695 cpu : usr=34.73%, sys=1.81%, ctx=1188, majf=0, minf=9 00:20:02.695 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=81.9%, 16=17.0%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename0: (groupid=0, jobs=1): err= 0: pid=86758: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=215, BW=863KiB/s (883kB/s)(8632KiB/10005msec) 00:20:02.695 slat (usec): min=4, max=8027, avg=22.41, stdev=211.31 00:20:02.695 clat (msec): min=25, max=145, avg=74.08, stdev=22.82 00:20:02.695 lat (msec): min=25, max=145, avg=74.10, stdev=22.83 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 52], 00:20:02.695 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 77], 00:20:02.695 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.695 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 142], 99.95th=[ 146], 00:20:02.695 | 99.99th=[ 146] 00:20:02.695 bw ( KiB/s): min= 632, max= 1120, per=4.23%, avg=847.84, stdev=172.32, samples=19 00:20:02.695 iops : min= 158, max= 280, avg=211.95, stdev=43.10, samples=19 00:20:02.695 lat (msec) : 50=18.72%, 100=65.99%, 250=15.29% 00:20:02.695 cpu : usr=40.47%, sys=2.09%, ctx=1231, majf=0, minf=9 00:20:02.695 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=79.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename0: (groupid=0, jobs=1): err= 0: pid=86759: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=197, BW=789KiB/s (808kB/s)(7900KiB/10009msec) 00:20:02.695 slat (usec): min=3, max=7513, avg=22.25, stdev=224.31 00:20:02.695 clat (msec): min=25, max=158, avg=80.94, stdev=26.35 00:20:02.695 lat (msec): min=25, max=158, avg=80.96, stdev=26.35 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 54], 00:20:02.695 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 92], 00:20:02.695 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 122], 00:20:02.695 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:20:02.695 | 99.99th=[ 159] 00:20:02.695 bw ( KiB/s): min= 512, max= 1120, per=3.91%, avg=783.65, stdev=213.34, samples=20 00:20:02.695 iops : min= 128, max= 280, avg=195.90, stdev=53.33, samples=20 00:20:02.695 lat (msec) : 50=16.71%, 100=52.96%, 250=30.33% 00:20:02.695 cpu : usr=44.34%, sys=2.18%, ctx=1224, majf=0, minf=9 00:20:02.695 IO 
depths : 1=0.1%, 2=3.2%, 4=12.9%, 8=69.7%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=90.6%, 8=6.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename0: (groupid=0, jobs=1): err= 0: pid=86760: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=210, BW=844KiB/s (864kB/s)(8448KiB/10013msec) 00:20:02.695 slat (usec): min=4, max=4024, avg=16.88, stdev=87.39 00:20:02.695 clat (msec): min=27, max=147, avg=75.77, stdev=23.63 00:20:02.695 lat (msec): min=27, max=147, avg=75.79, stdev=23.63 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53], 00:20:02.695 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 79], 00:20:02.695 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 114], 00:20:02.695 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 148], 00:20:02.695 | 99.99th=[ 148] 00:20:02.695 bw ( KiB/s): min= 512, max= 1104, per=4.18%, avg=838.45, stdev=183.20, samples=20 00:20:02.695 iops : min= 128, max= 276, avg=209.60, stdev=45.80, samples=20 00:20:02.695 lat (msec) : 50=18.66%, 100=64.20%, 250=17.14% 00:20:02.695 cpu : usr=44.00%, sys=2.24%, ctx=1140, majf=0, minf=9 00:20:02.695 IO depths : 1=0.1%, 2=1.3%, 4=4.9%, 8=78.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename1: (groupid=0, jobs=1): err= 0: pid=86761: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=217, BW=872KiB/s (893kB/s)(8744KiB/10028msec) 00:20:02.695 slat (usec): min=7, max=8039, avg=32.45, stdev=313.26 00:20:02.695 clat (msec): min=28, max=121, avg=73.27, stdev=20.47 00:20:02.695 lat (msec): min=28, max=121, avg=73.30, stdev=20.47 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 54], 00:20:02.695 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:20:02.695 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:02.695 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 118], 99.95th=[ 121], 00:20:02.695 | 99.99th=[ 123] 00:20:02.695 bw ( KiB/s): min= 688, max= 1072, per=4.33%, avg=868.05, stdev=140.97, samples=20 00:20:02.695 iops : min= 172, max= 268, avg=217.00, stdev=35.25, samples=20 00:20:02.695 lat (msec) : 50=15.32%, 100=72.10%, 250=12.58% 00:20:02.695 cpu : usr=43.25%, sys=2.37%, ctx=1440, majf=0, minf=9 00:20:02.695 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename1: (groupid=0, jobs=1): err= 0: pid=86762: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=216, BW=865KiB/s (886kB/s)(8688KiB/10044msec) 00:20:02.695 slat (usec): min=5, max=8368, avg=21.44, stdev=249.32 00:20:02.695 clat (usec): min=1902, max=139763, avg=73799.35, stdev=26503.91 
00:20:02.695 lat (usec): min=1920, max=139776, avg=73820.79, stdev=26509.81 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 46], 20.00th=[ 58], 00:20:02.695 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 80], 00:20:02.695 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 109], 00:20:02.695 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:20:02.695 | 99.99th=[ 140] 00:20:02.695 bw ( KiB/s): min= 640, max= 2048, per=4.31%, avg=864.40, stdev=306.49, samples=20 00:20:02.695 iops : min= 160, max= 512, avg=216.10, stdev=76.62, samples=20 00:20:02.695 lat (msec) : 2=0.32%, 4=2.12%, 10=2.72%, 20=1.38%, 50=7.18% 00:20:02.695 lat (msec) : 100=68.78%, 250=17.50% 00:20:02.695 cpu : usr=42.07%, sys=2.28%, ctx=1532, majf=0, minf=0 00:20:02.695 IO depths : 1=0.3%, 2=1.7%, 4=5.8%, 8=76.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:02.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 complete : 0=0.0%, 4=89.3%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.695 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.695 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.695 filename1: (groupid=0, jobs=1): err= 0: pid=86763: Fri Dec 13 06:59:07 2024 00:20:02.695 read: IOPS=216, BW=868KiB/s (889kB/s)(8736KiB/10066msec) 00:20:02.695 slat (usec): min=3, max=8024, avg=22.88, stdev=225.29 00:20:02.695 clat (msec): min=2, max=136, avg=73.54, stdev=25.30 00:20:02.695 lat (msec): min=2, max=136, avg=73.56, stdev=25.31 00:20:02.695 clat percentiles (msec): 00:20:02.695 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 46], 20.00th=[ 55], 00:20:02.695 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:20:02.695 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 108], 00:20:02.695 | 99.00th=[ 116], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 136], 00:20:02.695 | 99.99th=[ 136] 00:20:02.695 bw ( KiB/s): min= 640, max= 1792, per=4.32%, avg=866.95, stdev=258.75, samples=20 00:20:02.695 iops : min= 160, max= 448, avg=216.70, stdev=64.72, samples=20 00:20:02.695 lat (msec) : 4=1.47%, 10=2.20%, 20=1.37%, 50=11.40%, 100=68.13% 00:20:02.695 lat (msec) : 250=15.43% 00:20:02.695 cpu : usr=41.11%, sys=1.98%, ctx=1215, majf=0, minf=9 00:20:02.696 IO depths : 1=0.3%, 2=1.3%, 4=4.3%, 8=78.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename1: (groupid=0, jobs=1): err= 0: pid=86764: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=214, BW=859KiB/s (880kB/s)(8620KiB/10036msec) 00:20:02.696 slat (usec): min=4, max=8027, avg=25.07, stdev=298.76 00:20:02.696 clat (msec): min=8, max=143, avg=74.31, stdev=22.24 00:20:02.696 lat (msec): min=8, max=143, avg=74.33, stdev=22.24 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 13], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 59], 00:20:02.696 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:02.696 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.696 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 122], 99.95th=[ 132], 00:20:02.696 | 99.99th=[ 144] 00:20:02.696 bw ( KiB/s): min= 656, max= 1496, per=4.28%, avg=858.30, stdev=190.41, samples=20 00:20:02.696 iops : min= 164, max= 374, 
avg=214.55, stdev=47.62, samples=20 00:20:02.696 lat (msec) : 10=0.74%, 20=2.04%, 50=11.83%, 100=71.51%, 250=13.87% 00:20:02.696 cpu : usr=32.47%, sys=1.56%, ctx=995, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=81.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename1: (groupid=0, jobs=1): err= 0: pid=86765: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=191, BW=767KiB/s (785kB/s)(7704KiB/10045msec) 00:20:02.696 slat (usec): min=7, max=4029, avg=16.29, stdev=91.62 00:20:02.696 clat (msec): min=13, max=155, avg=83.20, stdev=24.78 00:20:02.696 lat (msec): min=13, max=155, avg=83.22, stdev=24.78 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 62], 00:20:02.696 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 96], 00:20:02.696 | 70.00th=[ 100], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 122], 00:20:02.696 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:02.696 | 99.99th=[ 157] 00:20:02.696 bw ( KiB/s): min= 512, max= 1152, per=3.82%, avg=766.40, stdev=200.41, samples=20 00:20:02.696 iops : min= 128, max= 288, avg=191.60, stdev=50.10, samples=20 00:20:02.696 lat (msec) : 20=1.56%, 50=10.33%, 100=59.66%, 250=28.45% 00:20:02.696 cpu : usr=43.32%, sys=2.30%, ctx=1249, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=3.6%, 4=14.8%, 8=67.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=91.6%, 8=5.1%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename1: (groupid=0, jobs=1): err= 0: pid=86766: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=206, BW=825KiB/s (844kB/s)(8284KiB/10045msec) 00:20:02.696 slat (usec): min=7, max=272, avg=14.00, stdev= 7.58 00:20:02.696 clat (msec): min=15, max=143, avg=77.42, stdev=22.46 00:20:02.696 lat (msec): min=15, max=143, avg=77.43, stdev=22.46 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 17], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:20:02.696 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 86], 00:20:02.696 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:20:02.696 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 142], 00:20:02.696 | 99.99th=[ 144] 00:20:02.696 bw ( KiB/s): min= 608, max= 1368, per=4.11%, avg=824.80, stdev=203.86, samples=20 00:20:02.696 iops : min= 152, max= 342, avg=206.20, stdev=50.97, samples=20 00:20:02.696 lat (msec) : 20=1.45%, 50=12.12%, 100=69.82%, 250=16.61% 00:20:02.696 cpu : usr=35.68%, sys=1.81%, ctx=1329, majf=0, minf=0 00:20:02.696 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=81.7%, 16=17.3%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=88.2%, 8=11.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename1: (groupid=0, jobs=1): err= 0: pid=86767: Fri Dec 13 06:59:07 
2024 00:20:02.696 read: IOPS=212, BW=849KiB/s (869kB/s)(8516KiB/10032msec) 00:20:02.696 slat (usec): min=4, max=8029, avg=25.97, stdev=300.71 00:20:02.696 clat (msec): min=23, max=131, avg=75.20, stdev=20.22 00:20:02.696 lat (msec): min=23, max=131, avg=75.23, stdev=20.21 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:02.696 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:20:02.696 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.696 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 132], 00:20:02.696 | 99.99th=[ 132] 00:20:02.696 bw ( KiB/s): min= 664, max= 1072, per=4.23%, avg=847.30, stdev=146.46, samples=20 00:20:02.696 iops : min= 166, max= 268, avg=211.80, stdev=36.64, samples=20 00:20:02.696 lat (msec) : 50=15.03%, 100=73.32%, 250=11.65% 00:20:02.696 cpu : usr=32.29%, sys=1.70%, ctx=994, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename1: (groupid=0, jobs=1): err= 0: pid=86768: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=209, BW=839KiB/s (859kB/s)(8428KiB/10046msec) 00:20:02.696 slat (usec): min=5, max=4026, avg=20.34, stdev=146.87 00:20:02.696 clat (msec): min=13, max=127, avg=76.07, stdev=20.47 00:20:02.696 lat (msec): min=13, max=127, avg=76.09, stdev=20.48 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:20:02.696 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:20:02.696 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 108], 00:20:02.696 | 99.00th=[ 115], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 128], 00:20:02.696 | 99.99th=[ 128] 00:20:02.696 bw ( KiB/s): min= 640, max= 1160, per=4.19%, avg=839.20, stdev=149.26, samples=20 00:20:02.696 iops : min= 160, max= 290, avg=209.80, stdev=37.31, samples=20 00:20:02.696 lat (msec) : 20=0.76%, 50=10.35%, 100=74.28%, 250=14.62% 00:20:02.696 cpu : usr=41.06%, sys=2.46%, ctx=1292, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename2: (groupid=0, jobs=1): err= 0: pid=86769: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=216, BW=868KiB/s (889kB/s)(8688KiB/10011msec) 00:20:02.696 slat (usec): min=3, max=8032, avg=33.24, stdev=384.09 00:20:02.696 clat (msec): min=18, max=156, avg=73.61, stdev=21.50 00:20:02.696 lat (msec): min=18, max=156, avg=73.65, stdev=21.48 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 51], 00:20:02.696 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:20:02.696 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:20:02.696 | 99.00th=[ 117], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 157], 00:20:02.696 | 99.99th=[ 157] 00:20:02.696 bw ( KiB/s): min= 688, 
max= 1072, per=4.30%, avg=862.40, stdev=132.36, samples=20 00:20:02.696 iops : min= 172, max= 268, avg=215.60, stdev=33.09, samples=20 00:20:02.696 lat (msec) : 20=0.14%, 50=19.61%, 100=68.05%, 250=12.20% 00:20:02.696 cpu : usr=31.40%, sys=1.41%, ctx=883, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename2: (groupid=0, jobs=1): err= 0: pid=86770: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=206, BW=826KiB/s (846kB/s)(8300KiB/10043msec) 00:20:02.696 slat (usec): min=8, max=8027, avg=25.32, stdev=304.46 00:20:02.696 clat (msec): min=25, max=141, avg=77.19, stdev=21.11 00:20:02.696 lat (msec): min=25, max=141, avg=77.22, stdev=21.12 00:20:02.696 clat percentiles (msec): 00:20:02.696 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:02.696 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 83], 00:20:02.696 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 109], 00:20:02.696 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 142], 00:20:02.696 | 99.99th=[ 142] 00:20:02.696 bw ( KiB/s): min= 568, max= 1040, per=4.12%, avg=826.00, stdev=158.47, samples=20 00:20:02.696 iops : min= 142, max= 260, avg=206.50, stdev=39.62, samples=20 00:20:02.696 lat (msec) : 50=12.72%, 100=72.43%, 250=14.84% 00:20:02.696 cpu : usr=31.42%, sys=1.72%, ctx=922, majf=0, minf=9 00:20:02.696 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=79.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:02.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.696 issued rwts: total=2075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.696 filename2: (groupid=0, jobs=1): err= 0: pid=86771: Fri Dec 13 06:59:07 2024 00:20:02.696 read: IOPS=199, BW=798KiB/s (817kB/s)(8008KiB/10034msec) 00:20:02.696 slat (usec): min=3, max=8023, avg=27.98, stdev=259.90 00:20:02.697 clat (msec): min=25, max=159, avg=79.98, stdev=23.58 00:20:02.697 lat (msec): min=25, max=159, avg=80.01, stdev=23.58 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:20:02.697 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 94], 00:20:02.697 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 114], 00:20:02.697 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 161], 00:20:02.697 | 99.99th=[ 161] 00:20:02.697 bw ( KiB/s): min= 512, max= 1138, per=3.96%, avg=794.50, stdev=206.68, samples=20 00:20:02.697 iops : min= 128, max= 284, avg=198.60, stdev=51.63, samples=20 00:20:02.697 lat (msec) : 50=12.74%, 100=62.64%, 250=24.63% 00:20:02.697 cpu : usr=40.79%, sys=2.16%, ctx=1179, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=3.2%, 4=12.9%, 8=69.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=90.9%, 8=6.3%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 
filename2: (groupid=0, jobs=1): err= 0: pid=86772: Fri Dec 13 06:59:07 2024 00:20:02.697 read: IOPS=210, BW=841KiB/s (861kB/s)(8440KiB/10035msec) 00:20:02.697 slat (usec): min=7, max=8034, avg=21.50, stdev=246.73 00:20:02.697 clat (msec): min=17, max=143, avg=75.92, stdev=20.67 00:20:02.697 lat (msec): min=17, max=143, avg=75.94, stdev=20.68 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 25], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:20:02.697 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:20:02.697 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 109], 00:20:02.697 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 130], 99.95th=[ 132], 00:20:02.697 | 99.99th=[ 144] 00:20:02.697 bw ( KiB/s): min= 640, max= 1144, per=4.19%, avg=840.00, stdev=159.92, samples=20 00:20:02.697 iops : min= 160, max= 286, avg=210.00, stdev=39.98, samples=20 00:20:02.697 lat (msec) : 20=0.66%, 50=11.37%, 100=72.32%, 250=15.64% 00:20:02.697 cpu : usr=40.93%, sys=2.37%, ctx=1238, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.7%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 filename2: (groupid=0, jobs=1): err= 0: pid=86773: Fri Dec 13 06:59:07 2024 00:20:02.697 read: IOPS=192, BW=769KiB/s (787kB/s)(7724KiB/10045msec) 00:20:02.697 slat (usec): min=7, max=8024, avg=18.32, stdev=182.37 00:20:02.697 clat (msec): min=13, max=154, avg=83.01, stdev=26.30 00:20:02.697 lat (msec): min=13, max=154, avg=83.03, stdev=26.31 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 16], 5.00th=[ 44], 10.00th=[ 49], 20.00th=[ 59], 00:20:02.697 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 94], 00:20:02.697 | 70.00th=[ 100], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 129], 00:20:02.697 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:20:02.697 | 99.99th=[ 155] 00:20:02.697 bw ( KiB/s): min= 512, max= 1264, per=3.82%, avg=766.00, stdev=218.02, samples=20 00:20:02.697 iops : min= 128, max= 316, avg=191.50, stdev=54.51, samples=20 00:20:02.697 lat (msec) : 20=1.55%, 50=10.05%, 100=61.32%, 250=27.08% 00:20:02.697 cpu : usr=40.23%, sys=1.95%, ctx=1221, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=3.6%, 4=14.4%, 8=67.6%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=91.4%, 8=5.4%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 filename2: (groupid=0, jobs=1): err= 0: pid=86774: Fri Dec 13 06:59:07 2024 00:20:02.697 read: IOPS=216, BW=866KiB/s (887kB/s)(8676KiB/10021msec) 00:20:02.697 slat (usec): min=4, max=8033, avg=26.67, stdev=243.36 00:20:02.697 clat (msec): min=27, max=127, avg=73.80, stdev=20.36 00:20:02.697 lat (msec): min=27, max=127, avg=73.83, stdev=20.37 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:02.697 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:20:02.697 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:02.697 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 121], 
99.95th=[ 128], 00:20:02.697 | 99.99th=[ 128] 00:20:02.697 bw ( KiB/s): min= 648, max= 1096, per=4.30%, avg=861.10, stdev=138.63, samples=20 00:20:02.697 iops : min= 162, max= 274, avg=215.25, stdev=34.65, samples=20 00:20:02.697 lat (msec) : 50=16.92%, 100=69.29%, 250=13.79% 00:20:02.697 cpu : usr=38.97%, sys=2.03%, ctx=1146, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 filename2: (groupid=0, jobs=1): err= 0: pid=86775: Fri Dec 13 06:59:07 2024 00:20:02.697 read: IOPS=218, BW=872KiB/s (893kB/s)(8736KiB/10014msec) 00:20:02.697 slat (usec): min=3, max=8029, avg=28.71, stdev=311.88 00:20:02.697 clat (msec): min=21, max=140, avg=73.25, stdev=20.59 00:20:02.697 lat (msec): min=21, max=140, avg=73.28, stdev=20.58 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:02.697 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:02.697 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.697 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 142], 00:20:02.697 | 99.99th=[ 142] 00:20:02.697 bw ( KiB/s): min= 688, max= 1072, per=4.33%, avg=867.20, stdev=143.05, samples=20 00:20:02.697 iops : min= 172, max= 268, avg=216.80, stdev=35.76, samples=20 00:20:02.697 lat (msec) : 50=18.41%, 100=70.47%, 250=11.13% 00:20:02.697 cpu : usr=31.20%, sys=1.62%, ctx=899, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 filename2: (groupid=0, jobs=1): err= 0: pid=86776: Fri Dec 13 06:59:07 2024 00:20:02.697 read: IOPS=215, BW=864KiB/s (885kB/s)(8660KiB/10025msec) 00:20:02.697 slat (usec): min=8, max=8025, avg=28.79, stdev=344.10 00:20:02.697 clat (msec): min=25, max=131, avg=73.98, stdev=20.67 00:20:02.697 lat (msec): min=25, max=131, avg=74.01, stdev=20.66 00:20:02.697 clat percentiles (msec): 00:20:02.697 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:02.697 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:02.697 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:02.697 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 129], 00:20:02.697 | 99.99th=[ 132] 00:20:02.697 bw ( KiB/s): min= 664, max= 1048, per=4.29%, avg=859.60, stdev=135.83, samples=20 00:20:02.697 iops : min= 166, max= 262, avg=214.90, stdev=33.96, samples=20 00:20:02.697 lat (msec) : 50=16.63%, 100=69.65%, 250=13.72% 00:20:02.697 cpu : usr=31.52%, sys=1.65%, ctx=944, majf=0, minf=9 00:20:02.697 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:02.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.697 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.697 
latency : target=0, window=0, percentile=100.00%, depth=16 00:20:02.697 00:20:02.697 Run status group 0 (all jobs): 00:20:02.697 READ: bw=19.6MiB/s (20.5MB/s), 767KiB/s-892KiB/s (785kB/s-914kB/s), io=197MiB (207MB), run=10003-10066msec 00:20:02.956 06:59:07 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:02.956 06:59:07 -- target/dif.sh@43 -- # local sub 00:20:02.956 06:59:07 -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.956 06:59:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:02.957 06:59:07 -- target/dif.sh@36 -- # local sub_id=0 00:20:02.957 06:59:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.957 06:59:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:02.957 06:59:07 -- target/dif.sh@36 -- # local sub_id=1 00:20:02.957 06:59:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.957 06:59:07 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:02.957 06:59:07 -- target/dif.sh@36 -- # local sub_id=2 00:20:02.957 06:59:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # numjobs=2 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # iodepth=8 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # runtime=5 00:20:02.957 06:59:07 -- target/dif.sh@115 -- # files=1 00:20:02.957 06:59:07 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:02.957 06:59:07 -- target/dif.sh@28 -- # local sub 00:20:02.957 06:59:07 -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.957 06:59:07 -- target/dif.sh@31 -- # create_subsystem 0 00:20:02.957 06:59:07 -- target/dif.sh@18 -- # local sub_id=0 00:20:02.957 06:59:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 1 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 bdev_null0 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 [2024-12-13 06:59:07.391492] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.957 06:59:07 -- target/dif.sh@31 -- # create_subsystem 1 00:20:02.957 06:59:07 -- target/dif.sh@18 -- # local sub_id=1 00:20:02.957 06:59:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 bdev_null1 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.957 06:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.957 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 06:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.957 06:59:07 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:02.957 06:59:07 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:02.957 06:59:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:02.957 06:59:07 -- nvmf/common.sh@520 -- # config=() 00:20:02.957 06:59:07 -- nvmf/common.sh@520 -- # local subsystem config 00:20:02.957 06:59:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.957 06:59:07 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:20:02.957 06:59:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.957 06:59:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:02.957 { 00:20:02.957 "params": { 00:20:02.957 "name": "Nvme$subsystem", 00:20:02.957 "trtype": "$TEST_TRANSPORT", 00:20:02.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.957 "adrfam": "ipv4", 00:20:02.957 "trsvcid": "$NVMF_PORT", 00:20:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.957 "hdgst": ${hdgst:-false}, 00:20:02.957 "ddgst": ${ddgst:-false} 00:20:02.957 }, 00:20:02.957 "method": "bdev_nvme_attach_controller" 00:20:02.957 } 00:20:02.957 EOF 00:20:02.957 )") 00:20:02.957 06:59:07 -- target/dif.sh@82 -- # gen_fio_conf 00:20:02.957 06:59:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:02.957 06:59:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.957 06:59:07 -- target/dif.sh@54 -- # local file 00:20:02.957 06:59:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:02.957 06:59:07 -- target/dif.sh@56 -- # cat 00:20:02.957 06:59:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.957 06:59:07 -- common/autotest_common.sh@1330 -- # shift 00:20:02.957 06:59:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:02.957 06:59:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.957 06:59:07 -- nvmf/common.sh@542 -- # cat 00:20:02.957 06:59:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.957 06:59:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:02.957 06:59:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:02.957 06:59:07 -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.957 06:59:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:02.957 06:59:07 -- target/dif.sh@73 -- # cat 00:20:02.957 06:59:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:02.957 06:59:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:02.957 { 00:20:02.957 "params": { 00:20:02.957 "name": "Nvme$subsystem", 00:20:02.957 "trtype": "$TEST_TRANSPORT", 00:20:02.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.957 "adrfam": "ipv4", 00:20:02.957 "trsvcid": "$NVMF_PORT", 00:20:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.957 "hdgst": ${hdgst:-false}, 00:20:02.957 "ddgst": ${ddgst:-false} 00:20:02.957 }, 00:20:02.957 "method": "bdev_nvme_attach_controller" 00:20:02.957 } 00:20:02.957 EOF 00:20:02.957 )") 00:20:02.957 06:59:07 -- target/dif.sh@72 -- # (( file++ )) 00:20:02.957 06:59:07 -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.957 06:59:07 -- nvmf/common.sh@542 -- # cat 00:20:02.957 06:59:07 -- nvmf/common.sh@544 -- # jq . 
00:20:02.957 06:59:07 -- nvmf/common.sh@545 -- # IFS=, 00:20:02.957 06:59:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:02.957 "params": { 00:20:02.957 "name": "Nvme0", 00:20:02.957 "trtype": "tcp", 00:20:02.957 "traddr": "10.0.0.2", 00:20:02.957 "adrfam": "ipv4", 00:20:02.957 "trsvcid": "4420", 00:20:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:02.957 "hdgst": false, 00:20:02.957 "ddgst": false 00:20:02.957 }, 00:20:02.957 "method": "bdev_nvme_attach_controller" 00:20:02.957 },{ 00:20:02.957 "params": { 00:20:02.957 "name": "Nvme1", 00:20:02.957 "trtype": "tcp", 00:20:02.957 "traddr": "10.0.0.2", 00:20:02.957 "adrfam": "ipv4", 00:20:02.957 "trsvcid": "4420", 00:20:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.957 "hdgst": false, 00:20:02.957 "ddgst": false 00:20:02.957 }, 00:20:02.957 "method": "bdev_nvme_attach_controller" 00:20:02.957 }' 00:20:02.957 06:59:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:02.957 06:59:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:02.958 06:59:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.958 06:59:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.958 06:59:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:02.958 06:59:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:03.216 06:59:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:03.216 06:59:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:03.216 06:59:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:03.216 06:59:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:03.216 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:03.216 ... 00:20:03.216 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:03.216 ... 00:20:03.216 fio-3.35 00:20:03.216 Starting 4 threads 00:20:03.783 [2024-12-13 06:59:08.003009] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
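The job file fio reads from /dev/fd/61 is never echoed into this log; only the per-job header above ("rw=randread, bs=(R) 8192B ... iodepth=8") reflects it. Below is a minimal sketch of an equivalent file, assuming SPDK's usual Nvme0n1/Nvme1n1 bdev names for the two attached controllers and reusing the parameters set earlier in the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5); the exact keys gen_fio_conf emits may differ.

cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev     # served by the LD_PRELOADed build/fio/spdk_bdev plugin
thread=1               # the SPDK bdev plugin requires threaded jobs
time_based=1
runtime=5
iodepth=8
rw=randread
bs=8k,16k,128k         # per-class sizes: (R) 8k, (W) 16k, (T) 128k, as fio echoes back
numjobs=2              # 2 job sections x numjobs=2 = the 4 threads fio starts

[filename0]
filename=Nvme0n1       # assumed bdev name for the cnode0 namespace

[filename1]
filename=Nvme1n1       # assumed bdev name for the cnode1 namespace
EOF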
00:20:03.783 [2024-12-13 06:59:08.003089] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:09.048 00:20:09.048 filename0: (groupid=0, jobs=1): err= 0: pid=86926: Fri Dec 13 06:59:13 2024 00:20:09.048 read: IOPS=2192, BW=17.1MiB/s (18.0MB/s)(85.7MiB/5003msec) 00:20:09.048 slat (nsec): min=6763, max=63601, avg=12174.52, stdev=5403.91 00:20:09.048 clat (usec): min=1190, max=7804, avg=3611.41, stdev=895.43 00:20:09.048 lat (usec): min=1198, max=7830, avg=3623.59, stdev=895.41 00:20:09.048 clat percentiles (usec): 00:20:09.048 | 1.00th=[ 1876], 5.00th=[ 1991], 10.00th=[ 2114], 20.00th=[ 2802], 00:20:09.048 | 30.00th=[ 2999], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3949], 00:20:09.048 | 70.00th=[ 4178], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:20:09.048 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 7570], 00:20:09.048 | 99.99th=[ 7635] 00:20:09.048 bw ( KiB/s): min=16000, max=18384, per=25.79%, avg=17482.67, stdev=997.56, samples=9 00:20:09.048 iops : min= 2000, max= 2298, avg=2185.22, stdev=124.86, samples=9 00:20:09.048 lat (msec) : 2=5.93%, 4=56.40%, 10=37.66% 00:20:09.048 cpu : usr=91.06%, sys=8.00%, ctx=8, majf=0, minf=0 00:20:09.048 IO depths : 1=0.1%, 2=8.8%, 4=58.9%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 issued rwts: total=10971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:09.048 filename0: (groupid=0, jobs=1): err= 0: pid=86927: Fri Dec 13 06:59:13 2024 00:20:09.048 read: IOPS=2194, BW=17.1MiB/s (18.0MB/s)(85.8MiB/5002msec) 00:20:09.048 slat (nsec): min=6940, max=59142, avg=15636.06, stdev=5246.56 00:20:09.048 clat (usec): min=1185, max=6752, avg=3598.53, stdev=882.82 00:20:09.048 lat (usec): min=1198, max=6767, avg=3614.17, stdev=882.48 00:20:09.048 clat percentiles (usec): 00:20:09.048 | 1.00th=[ 1893], 5.00th=[ 2008], 10.00th=[ 2114], 20.00th=[ 2769], 00:20:09.048 | 30.00th=[ 2966], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3949], 00:20:09.048 | 70.00th=[ 4178], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:20:09.048 | 99.00th=[ 4948], 99.50th=[ 4948], 99.90th=[ 5014], 99.95th=[ 5080], 00:20:09.048 | 99.99th=[ 5145] 00:20:09.048 bw ( KiB/s): min=16000, max=18336, per=25.81%, avg=17499.11, stdev=977.90, samples=9 00:20:09.048 iops : min= 2000, max= 2292, avg=2187.33, stdev=122.19, samples=9 00:20:09.048 lat (msec) : 2=4.91%, 4=58.17%, 10=36.93% 00:20:09.048 cpu : usr=91.60%, sys=7.40%, ctx=10, majf=0, minf=9 00:20:09.048 IO depths : 1=0.1%, 2=8.8%, 4=58.9%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 issued rwts: total=10979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:09.048 filename1: (groupid=0, jobs=1): err= 0: pid=86928: Fri Dec 13 06:59:13 2024 00:20:09.048 read: IOPS=2195, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5001msec) 00:20:09.048 slat (nsec): min=7472, max=65180, avg=15348.15, stdev=4965.09 00:20:09.048 clat (usec): min=1180, max=6771, avg=3599.84, stdev=882.66 00:20:09.048 lat (usec): min=1194, max=6786, avg=3615.19, stdev=882.72 00:20:09.048 clat percentiles (usec): 00:20:09.048 | 1.00th=[ 1893], 
5.00th=[ 2008], 10.00th=[ 2114], 20.00th=[ 2802], 00:20:09.048 | 30.00th=[ 2966], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3949], 00:20:09.048 | 70.00th=[ 4178], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:20:09.048 | 99.00th=[ 4948], 99.50th=[ 4948], 99.90th=[ 5080], 99.95th=[ 5080], 00:20:09.048 | 99.99th=[ 5145] 00:20:09.048 bw ( KiB/s): min=16000, max=18336, per=25.82%, avg=17502.78, stdev=975.41, samples=9 00:20:09.048 iops : min= 2000, max= 2292, avg=2187.78, stdev=121.89, samples=9 00:20:09.048 lat (msec) : 2=4.75%, 4=58.37%, 10=36.88% 00:20:09.048 cpu : usr=91.56%, sys=7.54%, ctx=8, majf=0, minf=9 00:20:09.048 IO depths : 1=0.1%, 2=8.8%, 4=58.9%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 issued rwts: total=10979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:09.048 filename1: (groupid=0, jobs=1): err= 0: pid=86929: Fri Dec 13 06:59:13 2024 00:20:09.048 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5003msec) 00:20:09.048 slat (nsec): min=6792, max=63317, avg=12229.30, stdev=5614.22 00:20:09.048 clat (usec): min=678, max=8526, avg=4179.68, stdev=887.42 00:20:09.048 lat (usec): min=687, max=8554, avg=4191.91, stdev=886.71 00:20:09.048 clat percentiles (usec): 00:20:09.048 | 1.00th=[ 1188], 5.00th=[ 2769], 10.00th=[ 3458], 20.00th=[ 3654], 00:20:09.048 | 30.00th=[ 3785], 40.00th=[ 3949], 50.00th=[ 4621], 60.00th=[ 4686], 00:20:09.048 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 4948], 00:20:09.048 | 99.00th=[ 5342], 99.50th=[ 5997], 99.90th=[ 6718], 99.95th=[ 8291], 00:20:09.048 | 99.99th=[ 8586] 00:20:09.048 bw ( KiB/s): min=13184, max=19209, per=22.66%, avg=15362.78, stdev=2585.18, samples=9 00:20:09.048 iops : min= 1648, max= 2401, avg=1920.33, stdev=323.12, samples=9 00:20:09.048 lat (usec) : 750=0.06%, 1000=0.16% 00:20:09.048 lat (msec) : 2=4.57%, 4=37.06%, 10=58.14% 00:20:09.048 cpu : usr=91.40%, sys=7.76%, ctx=15, majf=0, minf=0 00:20:09.048 IO depths : 1=0.1%, 2=20.6%, 4=52.4%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.048 issued rwts: total=9468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:09.048 00:20:09.048 Run status group 0 (all jobs): 00:20:09.048 READ: bw=66.2MiB/s (69.4MB/s), 14.8MiB/s-17.2MiB/s (15.5MB/s-18.0MB/s), io=331MiB (347MB), run=5001-5003msec 00:20:09.048 06:59:13 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:09.048 06:59:13 -- target/dif.sh@43 -- # local sub 00:20:09.048 06:59:13 -- target/dif.sh@45 -- # for sub in "$@" 00:20:09.048 06:59:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:09.048 06:59:13 -- target/dif.sh@36 -- # local sub_id=0 00:20:09.048 06:59:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:09.048 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.048 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.048 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.048 06:59:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:09.048 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.048 06:59:13 -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.048 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.048 06:59:13 -- target/dif.sh@45 -- # for sub in "$@" 00:20:09.048 06:59:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:09.048 06:59:13 -- target/dif.sh@36 -- # local sub_id=1 00:20:09.049 06:59:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 06:59:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 00:20:09.049 real 0m22.965s 00:20:09.049 user 2m3.756s 00:20:09.049 sys 0m8.192s 00:20:09.049 06:59:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:09.049 ************************************ 00:20:09.049 END TEST fio_dif_rand_params 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 ************************************ 00:20:09.049 06:59:13 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:09.049 06:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:09.049 06:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 ************************************ 00:20:09.049 START TEST fio_dif_digest 00:20:09.049 ************************************ 00:20:09.049 06:59:13 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:20:09.049 06:59:13 -- target/dif.sh@123 -- # local NULL_DIF 00:20:09.049 06:59:13 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:09.049 06:59:13 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:09.049 06:59:13 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:09.049 06:59:13 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:09.049 06:59:13 -- target/dif.sh@127 -- # numjobs=3 00:20:09.049 06:59:13 -- target/dif.sh@127 -- # iodepth=3 00:20:09.049 06:59:13 -- target/dif.sh@127 -- # runtime=10 00:20:09.049 06:59:13 -- target/dif.sh@128 -- # hdgst=true 00:20:09.049 06:59:13 -- target/dif.sh@128 -- # ddgst=true 00:20:09.049 06:59:13 -- target/dif.sh@130 -- # create_subsystems 0 00:20:09.049 06:59:13 -- target/dif.sh@28 -- # local sub 00:20:09.049 06:59:13 -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.049 06:59:13 -- target/dif.sh@31 -- # create_subsystem 0 00:20:09.049 06:59:13 -- target/dif.sh@18 -- # local sub_id=0 00:20:09.049 06:59:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 bdev_null0 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 06:59:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 06:59:13 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 06:59:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.049 06:59:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.049 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 [2024-12-13 06:59:13.398131] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.049 06:59:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.049 06:59:13 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:09.049 06:59:13 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:09.049 06:59:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:09.049 06:59:13 -- nvmf/common.sh@520 -- # config=() 00:20:09.049 06:59:13 -- nvmf/common.sh@520 -- # local subsystem config 00:20:09.049 06:59:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.049 06:59:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:09.049 06:59:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:09.049 { 00:20:09.049 "params": { 00:20:09.049 "name": "Nvme$subsystem", 00:20:09.049 "trtype": "$TEST_TRANSPORT", 00:20:09.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.049 "adrfam": "ipv4", 00:20:09.049 "trsvcid": "$NVMF_PORT", 00:20:09.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.049 "hdgst": ${hdgst:-false}, 00:20:09.049 "ddgst": ${ddgst:-false} 00:20:09.049 }, 00:20:09.049 "method": "bdev_nvme_attach_controller" 00:20:09.049 } 00:20:09.049 EOF 00:20:09.049 )") 00:20:09.049 06:59:13 -- target/dif.sh@82 -- # gen_fio_conf 00:20:09.049 06:59:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.049 06:59:13 -- target/dif.sh@54 -- # local file 00:20:09.049 06:59:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:09.049 06:59:13 -- target/dif.sh@56 -- # cat 00:20:09.049 06:59:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.049 06:59:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:09.049 06:59:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.049 06:59:13 -- common/autotest_common.sh@1330 -- # shift 00:20:09.049 06:59:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:09.049 06:59:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.049 06:59:13 -- nvmf/common.sh@542 -- # cat 00:20:09.049 06:59:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:09.049 06:59:13 -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:09.049 06:59:13 -- nvmf/common.sh@544 -- # jq . 
00:20:09.049 06:59:13 -- nvmf/common.sh@545 -- # IFS=, 00:20:09.049 06:59:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:09.049 "params": { 00:20:09.049 "name": "Nvme0", 00:20:09.049 "trtype": "tcp", 00:20:09.049 "traddr": "10.0.0.2", 00:20:09.049 "adrfam": "ipv4", 00:20:09.049 "trsvcid": "4420", 00:20:09.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.049 "hdgst": true, 00:20:09.049 "ddgst": true 00:20:09.049 }, 00:20:09.049 "method": "bdev_nvme_attach_controller" 00:20:09.049 }' 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:09.049 06:59:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:09.049 06:59:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:09.049 06:59:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:09.049 06:59:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:09.049 06:59:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.049 06:59:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.308 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:09.308 ... 00:20:09.308 fio-3.35 00:20:09.308 Starting 3 threads 00:20:09.567 [2024-12-13 06:59:13.936457] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
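Collected out of the xtrace above into plain commands, the digest-test target setup is this rpc sequence (rpc_cmd in the harness wraps scripts/rpc.py against the running target; the flags are exactly those traced, and the digests themselves are requested on the initiator side by the "hdgst": true / "ddgst": true attach parameters printed just above):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420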
00:20:09.567 [2024-12-13 06:59:13.936555] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:21.773 00:20:21.773 filename0: (groupid=0, jobs=1): err= 0: pid=87035: Fri Dec 13 06:59:24 2024 00:20:21.773 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(305MiB/10009msec) 00:20:21.773 slat (nsec): min=6990, max=59049, avg=11140.32, stdev=5682.36 00:20:21.773 clat (usec): min=8634, max=15034, avg=12297.20, stdev=674.17 00:20:21.773 lat (usec): min=8642, max=15060, avg=12308.34, stdev=674.82 00:20:21.773 clat percentiles (usec): 00:20:21.773 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11600], 20.00th=[11731], 00:20:21.773 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:20:21.773 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13566], 00:20:21.773 | 99.00th=[13829], 99.50th=[14091], 99.90th=[15008], 99.95th=[15008], 00:20:21.773 | 99.99th=[15008] 00:20:21.773 bw ( KiB/s): min=29952, max=33024, per=33.36%, avg=31161.37, stdev=863.05, samples=19 00:20:21.773 iops : min= 234, max= 258, avg=243.42, stdev= 6.76, samples=19 00:20:21.773 lat (msec) : 10=0.12%, 20=99.88% 00:20:21.773 cpu : usr=91.15%, sys=8.16%, ctx=14, majf=0, minf=0 00:20:21.773 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:21.773 filename0: (groupid=0, jobs=1): err= 0: pid=87036: Fri Dec 13 06:59:24 2024 00:20:21.773 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(304MiB/10001msec) 00:20:21.773 slat (usec): min=6, max=216, avg=10.96, stdev= 7.11 00:20:21.773 clat (usec): min=11275, max=14490, avg=12304.71, stdev=656.25 00:20:21.773 lat (usec): min=11283, max=14501, avg=12315.67, stdev=657.07 00:20:21.773 clat percentiles (usec): 00:20:21.773 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11600], 20.00th=[11731], 00:20:21.773 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:20:21.773 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13566], 00:20:21.773 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14484], 99.95th=[14484], 00:20:21.773 | 99.99th=[14484] 00:20:21.773 bw ( KiB/s): min=29952, max=32256, per=33.36%, avg=31167.84, stdev=736.32, samples=19 00:20:21.773 iops : min= 234, max= 252, avg=243.47, stdev= 5.77, samples=19 00:20:21.773 lat (msec) : 20=100.00% 00:20:21.773 cpu : usr=91.35%, sys=7.75%, ctx=122, majf=0, minf=0 00:20:21.773 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:21.773 filename0: (groupid=0, jobs=1): err= 0: pid=87037: Fri Dec 13 06:59:24 2024 00:20:21.773 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(305MiB/10006msec) 00:20:21.773 slat (nsec): min=6955, max=52193, avg=10550.84, stdev=4449.20 00:20:21.773 clat (usec): min=6777, max=14696, avg=12296.07, stdev=688.08 00:20:21.773 lat (usec): min=6784, max=14748, avg=12306.63, stdev=688.71 00:20:21.773 clat percentiles (usec): 00:20:21.773 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11600], 
20.00th=[11731], 00:20:21.773 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:20:21.773 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13566], 00:20:21.773 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14615], 99.95th=[14746], 00:20:21.773 | 99.99th=[14746] 00:20:21.773 bw ( KiB/s): min=29952, max=32256, per=33.37%, avg=31171.00, stdev=730.92, samples=19 00:20:21.773 iops : min= 234, max= 252, avg=243.47, stdev= 5.77, samples=19 00:20:21.773 lat (msec) : 10=0.12%, 20=99.88% 00:20:21.773 cpu : usr=91.98%, sys=7.42%, ctx=16, majf=0, minf=0 00:20:21.773 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.773 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.773 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:21.773 00:20:21.773 Run status group 0 (all jobs): 00:20:21.773 READ: bw=91.2MiB/s (95.7MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=913MiB (957MB), run=10001-10009msec 00:20:21.773 06:59:24 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:21.773 06:59:24 -- target/dif.sh@43 -- # local sub 00:20:21.773 06:59:24 -- target/dif.sh@45 -- # for sub in "$@" 00:20:21.773 06:59:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:21.773 06:59:24 -- target/dif.sh@36 -- # local sub_id=0 00:20:21.773 06:59:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:21.773 06:59:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.773 06:59:24 -- common/autotest_common.sh@10 -- # set +x 00:20:21.773 06:59:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.773 06:59:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:21.773 06:59:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.773 06:59:24 -- common/autotest_common.sh@10 -- # set +x 00:20:21.773 06:59:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.773 00:20:21.773 real 0m10.843s 00:20:21.773 user 0m27.967s 00:20:21.773 sys 0m2.544s 00:20:21.773 06:59:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:21.773 ************************************ 00:20:21.773 END TEST fio_dif_digest 00:20:21.773 ************************************ 00:20:21.773 06:59:24 -- common/autotest_common.sh@10 -- # set +x 00:20:21.773 06:59:24 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:21.773 06:59:24 -- target/dif.sh@147 -- # nvmftestfini 00:20:21.773 06:59:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:21.773 06:59:24 -- nvmf/common.sh@116 -- # sync 00:20:21.773 06:59:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:21.773 06:59:24 -- nvmf/common.sh@119 -- # set +e 00:20:21.773 06:59:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:21.773 06:59:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:21.773 rmmod nvme_tcp 00:20:21.773 rmmod nvme_fabrics 00:20:21.773 rmmod nvme_keyring 00:20:21.773 06:59:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:21.773 06:59:24 -- nvmf/common.sh@123 -- # set -e 00:20:21.773 06:59:24 -- nvmf/common.sh@124 -- # return 0 00:20:21.773 06:59:24 -- nvmf/common.sh@477 -- # '[' -n 86272 ']' 00:20:21.773 06:59:24 -- nvmf/common.sh@478 -- # killprocess 86272 00:20:21.773 06:59:24 -- common/autotest_common.sh@936 -- # '[' -z 86272 ']' 00:20:21.773 06:59:24 -- common/autotest_common.sh@940 -- # kill -0 86272 
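The teardown this section performs, reduced from the trace to its plain-command equivalent (86272 is the nvmf target pid this particular run recorded; killprocess probes it with kill -0 before killing):

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0
sync                          # nvmftestfini: flush before unloading modules
modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_tcp,
modprobe -v -r nvme-fabrics   # nvme_fabrics and nvme_keyring unloading
kill -0 86272 && kill 86272   # killprocess: check liveness, then terminate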
00:20:21.773 06:59:24 -- common/autotest_common.sh@941 -- # uname 00:20:21.773 06:59:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.773 06:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86272 00:20:21.773 06:59:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.773 06:59:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.773 killing process with pid 86272 00:20:21.773 06:59:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86272' 00:20:21.773 06:59:24 -- common/autotest_common.sh@955 -- # kill 86272 00:20:21.773 06:59:24 -- common/autotest_common.sh@960 -- # wait 86272 00:20:21.773 06:59:24 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:21.773 06:59:24 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:21.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:21.773 Waiting for block devices as requested 00:20:21.773 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:21.773 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:21.773 06:59:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.773 06:59:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.773 06:59:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.773 06:59:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:21.773 06:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.773 06:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:21.773 06:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.773 06:59:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:21.773 00:20:21.773 real 0m58.827s 00:20:21.773 user 3m46.318s 00:20:21.773 sys 0m19.181s 00:20:21.773 06:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:21.773 06:59:25 -- common/autotest_common.sh@10 -- # set +x 00:20:21.773 ************************************ 00:20:21.773 END TEST nvmf_dif 00:20:21.773 ************************************ 00:20:21.774 06:59:25 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:21.774 06:59:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:21.774 06:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.774 06:59:25 -- common/autotest_common.sh@10 -- # set +x 00:20:21.774 ************************************ 00:20:21.774 START TEST nvmf_abort_qd_sizes 00:20:21.774 ************************************ 00:20:21.774 06:59:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:21.774 * Looking for test storage... 
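[Annotation] One idiom worth decoding from the cleanup above: eval '_remove_spdk_ns 13> /dev/null'. This silences bash xtrace for a single command by redirecting the trace file descriptor, which the harness has evidently pointed at fd 13; a sketch of the mechanism, assuming BASH_XTRACEFD=13 (the redirection implies it, but the assignment itself is outside this log), with a hypothetical helper standing in for _remove_spdk_ns:

    exec 13>&2                # trace output normally mirrors stderr
    BASH_XTRACEFD=13          # assumption: set elsewhere in autotest_common.sh
    set -x
    noisy_cleanup() { rm -rf "/tmp/scratch.$$"; }   # hypothetical helper
    eval 'noisy_cleanup 13> /dev/null'              # tracing muted for this call only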
00:20:21.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:21.774 06:59:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:21.774 06:59:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:21.774 06:59:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:21.774 06:59:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:21.774 06:59:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:21.774 06:59:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:21.774 06:59:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:21.774 06:59:25 -- scripts/common.sh@335 -- # IFS=.-: 00:20:21.774 06:59:25 -- scripts/common.sh@335 -- # read -ra ver1 00:20:21.774 06:59:25 -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.774 06:59:25 -- scripts/common.sh@336 -- # read -ra ver2 00:20:21.774 06:59:25 -- scripts/common.sh@337 -- # local 'op=<' 00:20:21.774 06:59:25 -- scripts/common.sh@339 -- # ver1_l=2 00:20:21.774 06:59:25 -- scripts/common.sh@340 -- # ver2_l=1 00:20:21.774 06:59:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:21.774 06:59:25 -- scripts/common.sh@343 -- # case "$op" in 00:20:21.774 06:59:25 -- scripts/common.sh@344 -- # : 1 00:20:21.774 06:59:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:21.774 06:59:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.774 06:59:25 -- scripts/common.sh@364 -- # decimal 1 00:20:21.774 06:59:25 -- scripts/common.sh@352 -- # local d=1 00:20:21.774 06:59:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.774 06:59:25 -- scripts/common.sh@354 -- # echo 1 00:20:21.774 06:59:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:21.774 06:59:25 -- scripts/common.sh@365 -- # decimal 2 00:20:21.774 06:59:25 -- scripts/common.sh@352 -- # local d=2 00:20:21.774 06:59:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.774 06:59:25 -- scripts/common.sh@354 -- # echo 2 00:20:21.774 06:59:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:21.774 06:59:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:21.774 06:59:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:21.774 06:59:25 -- scripts/common.sh@367 -- # return 0 00:20:21.774 06:59:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.774 06:59:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.774 --rc genhtml_branch_coverage=1 00:20:21.774 --rc genhtml_function_coverage=1 00:20:21.774 --rc genhtml_legend=1 00:20:21.774 --rc geninfo_all_blocks=1 00:20:21.774 --rc geninfo_unexecuted_blocks=1 00:20:21.774 00:20:21.774 ' 00:20:21.774 06:59:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.774 --rc genhtml_branch_coverage=1 00:20:21.774 --rc genhtml_function_coverage=1 00:20:21.774 --rc genhtml_legend=1 00:20:21.774 --rc geninfo_all_blocks=1 00:20:21.774 --rc geninfo_unexecuted_blocks=1 00:20:21.774 00:20:21.774 ' 00:20:21.774 06:59:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.774 --rc genhtml_branch_coverage=1 00:20:21.774 --rc genhtml_function_coverage=1 00:20:21.774 --rc genhtml_legend=1 00:20:21.774 --rc geninfo_all_blocks=1 00:20:21.774 --rc geninfo_unexecuted_blocks=1 00:20:21.774 00:20:21.774 ' 00:20:21.774 
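[Annotation] The long xtrace above is scripts/common.sh comparing the installed lcov version (1.15) against 2, field by field, before enabling the older branch/function coverage flags. Condensed, the comparison it steps through is roughly this sketch, not the script's literal code (the real version also normalizes non-numeric fields via decimal()):

    lt() {  # succeed when $1 sorts strictly before $2
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi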
06:59:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.774 --rc genhtml_branch_coverage=1 00:20:21.774 --rc genhtml_function_coverage=1 00:20:21.774 --rc genhtml_legend=1 00:20:21.774 --rc geninfo_all_blocks=1 00:20:21.774 --rc geninfo_unexecuted_blocks=1 00:20:21.774 00:20:21.774 ' 00:20:21.774 06:59:25 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.774 06:59:25 -- nvmf/common.sh@7 -- # uname -s 00:20:21.774 06:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.774 06:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.774 06:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.774 06:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.774 06:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.774 06:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.774 06:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.774 06:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.774 06:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.774 06:59:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 00:20:21.774 06:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=657f0c9c-3891-4064-9841-3d87a573b6e7 00:20:21.774 06:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.774 06:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.774 06:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.774 06:59:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.774 06:59:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.774 06:59:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.774 06:59:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.774 06:59:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.774 06:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.774 06:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.774 06:59:25 -- paths/export.sh@5 -- # export PATH 00:20:21.774 06:59:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.774 06:59:25 -- nvmf/common.sh@46 -- # : 0 00:20:21.774 06:59:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:21.774 06:59:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:21.774 06:59:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:21.774 06:59:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.774 06:59:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.774 06:59:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:21.774 06:59:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:21.774 06:59:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:21.774 06:59:25 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:21.774 06:59:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:21.774 06:59:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.774 06:59:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:21.774 06:59:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:21.774 06:59:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:21.774 06:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.774 06:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:21.774 06:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.774 06:59:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:21.774 06:59:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:21.774 06:59:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.774 06:59:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.774 06:59:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.774 06:59:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:21.774 06:59:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.774 06:59:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.774 06:59:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.774 06:59:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.774 06:59:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.774 06:59:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.774 06:59:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.774 06:59:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.774 06:59:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:21.774 06:59:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:21.774 Cannot find device "nvmf_tgt_br" 00:20:21.774 06:59:25 -- nvmf/common.sh@154 -- # true 00:20:21.774 06:59:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.774 Cannot find device "nvmf_tgt_br2" 00:20:21.774 06:59:25 -- nvmf/common.sh@155 -- # true 
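[Annotation] nvmf_veth_init starts by deleting any topology a previous run left behind, and each "Cannot find device" / "Cannot open network namespace" failure above and below is deliberately swallowed (the "# true" trace lines are the harness asserting the command was allowed to fail). A rough equivalent of that pre-clean, written with explicit || true guards instead of the script's per-command truth checks:

    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster 2>/dev/null || true
        ip link set "$br" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true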
00:20:21.774 06:59:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:21.774 06:59:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:21.774 Cannot find device "nvmf_tgt_br" 00:20:21.774 06:59:25 -- nvmf/common.sh@157 -- # true 00:20:21.774 06:59:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:21.774 Cannot find device "nvmf_tgt_br2" 00:20:21.775 06:59:25 -- nvmf/common.sh@158 -- # true 00:20:21.775 06:59:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:21.775 06:59:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:21.775 06:59:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.775 06:59:25 -- nvmf/common.sh@161 -- # true 00:20:21.775 06:59:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.775 06:59:25 -- nvmf/common.sh@162 -- # true 00:20:21.775 06:59:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.775 06:59:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.775 06:59:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.775 06:59:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.775 06:59:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.775 06:59:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.775 06:59:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.775 06:59:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.775 06:59:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:21.775 06:59:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:21.775 06:59:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:21.775 06:59:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:21.775 06:59:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:21.775 06:59:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.775 06:59:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.775 06:59:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.775 06:59:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:21.775 06:59:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:21.775 06:59:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.775 06:59:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.775 06:59:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.775 06:59:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.775 06:59:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.775 06:59:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:21.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:21.775 00:20:21.775 --- 10.0.0.2 ping statistics --- 00:20:21.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.775 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:21.775 06:59:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:21.775 00:20:21.775 --- 10.0.0.3 ping statistics --- 00:20:21.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.775 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:21.775 06:59:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:21.775 00:20:21.775 --- 10.0.0.1 ping statistics --- 00:20:21.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.775 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:21.775 06:59:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.775 06:59:25 -- nvmf/common.sh@421 -- # return 0 00:20:21.775 06:59:25 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:21.775 06:59:25 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:22.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.034 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:22.034 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:22.034 06:59:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.034 06:59:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:22.034 06:59:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:22.034 06:59:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.034 06:59:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:22.034 06:59:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:22.034 06:59:26 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:22.034 06:59:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.034 06:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.034 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.294 06:59:26 -- nvmf/common.sh@469 -- # nvmfpid=87634 00:20:22.294 06:59:26 -- nvmf/common.sh@470 -- # waitforlisten 87634 00:20:22.294 06:59:26 -- common/autotest_common.sh@829 -- # '[' -z 87634 ']' 00:20:22.294 06:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.294 06:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.294 06:59:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.294 06:59:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:22.294 06:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.294 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 [2024-12-13 06:59:26.605567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
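[Annotation] At this point the test bed is a three-address veth/bridge sandbox: 10.0.0.1 on the host side (nvmf_init_if), 10.0.0.2 and 10.0.0.3 on two veth peers inside the nvmf_tgt_ns_spdk namespace, all bridged over nvmf_br, with an iptables accept rule for TCP/4420. Condensed from the trace above (ordering preserved, error handling omitted), the creation half of the topology is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings are the smoke test (host to each namespace address, and namespace back to the host) before nvmf_tgt is launched inside the namespace and trusted to listen on 10.0.0.2:4420.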
00:20:22.294 [2024-12-13 06:59:26.605876] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.294 [2024-12-13 06:59:26.750052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.294 [2024-12-13 06:59:26.790979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.294 [2024-12-13 06:59:26.791431] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.294 [2024-12-13 06:59:26.791603] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.294 [2024-12-13 06:59:26.791754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.294 [2024-12-13 06:59:26.792018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.294 [2024-12-13 06:59:26.792187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.294 [2024-12-13 06:59:26.792271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.294 [2024-12-13 06:59:26.792272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.231 06:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.231 06:59:27 -- common/autotest_common.sh@862 -- # return 0 00:20:23.231 06:59:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.231 06:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.231 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.231 06:59:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:23.231 06:59:27 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:23.231 06:59:27 -- scripts/common.sh@312 -- # local nvmes 00:20:23.231 06:59:27 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:23.231 06:59:27 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:23.231 06:59:27 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:23.231 06:59:27 -- scripts/common.sh@297 -- # local bdf= 00:20:23.231 06:59:27 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:23.231 06:59:27 -- scripts/common.sh@232 -- # local class 00:20:23.231 06:59:27 -- scripts/common.sh@233 -- # local subclass 00:20:23.231 06:59:27 -- scripts/common.sh@234 -- # local progif 00:20:23.231 06:59:27 -- scripts/common.sh@235 -- # printf %02x 1 00:20:23.231 06:59:27 -- scripts/common.sh@235 -- # class=01 00:20:23.231 06:59:27 -- scripts/common.sh@236 -- # printf %02x 8 00:20:23.231 06:59:27 -- scripts/common.sh@236 -- # subclass=08 00:20:23.231 06:59:27 -- scripts/common.sh@237 -- # printf %02x 2 00:20:23.231 06:59:27 -- scripts/common.sh@237 -- # progif=02 00:20:23.231 06:59:27 -- scripts/common.sh@239 -- # hash lspci 00:20:23.231 06:59:27 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:23.231 06:59:27 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:23.231 06:59:27 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:23.231 06:59:27 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:23.231 06:59:27 -- scripts/common.sh@244 -- # tr -d '"' 00:20:23.231 06:59:27 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:23.231 06:59:27 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:23.231 06:59:27 -- scripts/common.sh@15 -- # local i 00:20:23.231 06:59:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:23.231 06:59:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:23.231 06:59:27 -- scripts/common.sh@24 -- # return 0 00:20:23.231 06:59:27 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:23.231 06:59:27 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:23.231 06:59:27 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:23.231 06:59:27 -- scripts/common.sh@15 -- # local i 00:20:23.231 06:59:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:23.231 06:59:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:23.231 06:59:27 -- scripts/common.sh@24 -- # return 0 00:20:23.231 06:59:27 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:23.231 06:59:27 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:23.231 06:59:27 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:23.231 06:59:27 -- scripts/common.sh@322 -- # uname -s 00:20:23.231 06:59:27 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:23.231 06:59:27 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:23.231 06:59:27 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:23.231 06:59:27 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:23.231 06:59:27 -- scripts/common.sh@322 -- # uname -s 00:20:23.231 06:59:27 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:23.231 06:59:27 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:23.231 06:59:27 -- scripts/common.sh@327 -- # (( 2 )) 00:20:23.231 06:59:27 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:23.231 06:59:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:23.231 06:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.231 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.231 ************************************ 00:20:23.231 START TEST spdk_target_abort 00:20:23.231 ************************************ 00:20:23.231 06:59:27 -- common/autotest_common.sh@1114 -- # spdk_target 00:20:23.231 06:59:27 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:23.232 06:59:27 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:23.232 06:59:27 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:23.232 06:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 spdk_targetn1 00:20:23.491 06:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.491 06:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.491 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 [2024-12-13 
06:59:27.805535] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.491 06:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:23.491 06:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.491 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 06:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:23.491 06:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.491 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 06:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:23.491 06:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.491 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 [2024-12-13 06:59:27.841708] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.491 06:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:23.491 06:59:27 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:26.779 Initializing NVMe Controllers 00:20:26.779 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:26.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:26.779 Initialization complete. Launching workers. 00:20:26.779 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10397, failed: 0 00:20:26.779 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1039, failed to submit 9358 00:20:26.779 success 810, unsuccess 229, failed 0 00:20:26.779 06:59:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:26.779 06:59:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:30.064 Initializing NVMe Controllers 00:20:30.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:30.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:30.064 Initialization complete. Launching workers. 00:20:30.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9000, failed: 0 00:20:30.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1178, failed to submit 7822 00:20:30.064 success 405, unsuccess 773, failed 0 00:20:30.064 06:59:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:30.064 06:59:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:33.354 Initializing NVMe Controllers 00:20:33.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:33.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:33.354 Initialization complete. Launching workers. 
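[Annotation] The per-run abort accounting is internally consistent and worth knowing how to read: "I/O completed" equals "abort submitted" plus "failed to submit", and "success + unsuccess" equals the submitted aborts. For the qd=4 run above: 1039 + 9358 = 10397 completed I/Os and 810 + 229 = 1039 aborts; for qd=24: 1178 + 7822 = 9000 and 405 + 773 = 1178. The qd=64 numbers that follow check out the same way (2424 + 29900 = 32324). A hypothetical one-liner for verifying any NS:/CTRLR: pair in this log format, assuming the field layout shown above:

    awk '/I\/O completed:/  { io = $(NF-2) + 0 }
         /abort submitted/ { if (($(NF-4) + 0) + $NF == io) print "consistent";
                             else print "MISMATCH" }'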
00:20:33.354 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32324, failed: 0 00:20:33.354 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2424, failed to submit 29900 00:20:33.354 success 455, unsuccess 1969, failed 0 00:20:33.354 06:59:37 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:33.354 06:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.354 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:20:33.354 06:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.354 06:59:37 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:33.354 06:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.354 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:20:33.613 06:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.613 06:59:37 -- target/abort_qd_sizes.sh@62 -- # killprocess 87634 00:20:33.613 06:59:37 -- common/autotest_common.sh@936 -- # '[' -z 87634 ']' 00:20:33.613 06:59:37 -- common/autotest_common.sh@940 -- # kill -0 87634 00:20:33.613 06:59:37 -- common/autotest_common.sh@941 -- # uname 00:20:33.613 06:59:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.613 06:59:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87634 00:20:33.613 killing process with pid 87634 00:20:33.613 06:59:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.613 06:59:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.613 06:59:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87634' 00:20:33.613 06:59:37 -- common/autotest_common.sh@955 -- # kill 87634 00:20:33.613 06:59:37 -- common/autotest_common.sh@960 -- # wait 87634 00:20:33.613 ************************************ 00:20:33.613 END TEST spdk_target_abort 00:20:33.613 ************************************ 00:20:33.613 00:20:33.613 real 0m10.366s 00:20:33.613 user 0m42.514s 00:20:33.613 sys 0m2.123s 00:20:33.613 06:59:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:33.613 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:20:33.613 06:59:38 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:33.613 06:59:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:33.613 06:59:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.613 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:20:33.872 ************************************ 00:20:33.872 START TEST kernel_target_abort 00:20:33.872 ************************************ 00:20:33.872 06:59:38 -- common/autotest_common.sh@1114 -- # kernel_target 00:20:33.872 06:59:38 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:33.872 06:59:38 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:33.872 06:59:38 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:33.872 06:59:38 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:33.872 06:59:38 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:33.872 06:59:38 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:33.872 06:59:38 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:33.872 06:59:38 -- nvmf/common.sh@627 -- # local block nvme 00:20:33.872 06:59:38 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:33.872 06:59:38 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:33.872 06:59:38 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:33.872 06:59:38 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:34.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.131 Waiting for block devices as requested 00:20:34.131 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:34.131 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:34.390 06:59:38 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:34.390 06:59:38 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:34.390 06:59:38 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:34.390 06:59:38 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:34.390 06:59:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:34.390 No valid GPT data, bailing 00:20:34.390 06:59:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:34.390 06:59:38 -- scripts/common.sh@393 -- # pt= 00:20:34.390 06:59:38 -- scripts/common.sh@394 -- # return 1 00:20:34.390 06:59:38 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:34.390 06:59:38 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:34.390 06:59:38 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:34.390 06:59:38 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:34.390 06:59:38 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:34.390 06:59:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:34.390 No valid GPT data, bailing 00:20:34.390 06:59:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:34.390 06:59:38 -- scripts/common.sh@393 -- # pt= 00:20:34.390 06:59:38 -- scripts/common.sh@394 -- # return 1 00:20:34.390 06:59:38 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:34.390 06:59:38 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:34.390 06:59:38 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:34.390 06:59:38 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:34.390 06:59:38 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:34.390 06:59:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:34.390 No valid GPT data, bailing 00:20:34.390 06:59:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:34.649 06:59:38 -- scripts/common.sh@393 -- # pt= 00:20:34.649 06:59:38 -- scripts/common.sh@394 -- # return 1 00:20:34.649 06:59:38 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:34.649 06:59:38 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:34.649 06:59:38 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:34.649 06:59:38 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:34.649 06:59:38 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:34.649 06:59:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:34.649 No valid GPT data, bailing 00:20:34.649 06:59:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:34.649 06:59:38 -- scripts/common.sh@393 -- # pt= 00:20:34.649 06:59:38 -- scripts/common.sh@394 -- # return 1 00:20:34.649 06:59:38 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:34.649 06:59:38 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:34.649 06:59:38 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:34.650 06:59:38 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:34.650 06:59:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:34.650 06:59:38 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:34.650 06:59:38 -- nvmf/common.sh@654 -- # echo 1 00:20:34.650 06:59:38 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:34.650 06:59:38 -- nvmf/common.sh@656 -- # echo 1 00:20:34.650 06:59:38 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:34.650 06:59:38 -- nvmf/common.sh@663 -- # echo tcp 00:20:34.650 06:59:38 -- nvmf/common.sh@664 -- # echo 4420 00:20:34.650 06:59:38 -- nvmf/common.sh@665 -- # echo ipv4 00:20:34.650 06:59:38 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:34.650 06:59:38 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:657f0c9c-3891-4064-9841-3d87a573b6e7 --hostid=657f0c9c-3891-4064-9841-3d87a573b6e7 -a 10.0.0.1 -t tcp -s 4420 00:20:34.650 00:20:34.650 Discovery Log Number of Records 2, Generation counter 2 00:20:34.650 =====Discovery Log Entry 0====== 00:20:34.650 trtype: tcp 00:20:34.650 adrfam: ipv4 00:20:34.650 subtype: current discovery subsystem 00:20:34.650 treq: not specified, sq flow control disable supported 00:20:34.650 portid: 1 00:20:34.650 trsvcid: 4420 00:20:34.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:34.650 traddr: 10.0.0.1 00:20:34.650 eflags: none 00:20:34.650 sectype: none 00:20:34.650 =====Discovery Log Entry 1====== 00:20:34.650 trtype: tcp 00:20:34.650 adrfam: ipv4 00:20:34.650 subtype: nvme subsystem 00:20:34.650 treq: not specified, sq flow control disable supported 00:20:34.650 portid: 1 00:20:34.650 trsvcid: 4420 00:20:34.650 subnqn: kernel_target 00:20:34.650 traddr: 10.0.0.1 00:20:34.650 eflags: none 00:20:34.650 sectype: none 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
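[Annotation] Backing up to the configfs sequence above: configure_kernel_target builds the in-kernel target purely from the nvmet configfs tree, with no SPDK process involved. The echoes in the trace do not show their redirection targets, so the attribute paths below are assumptions based on the standard nvmet layout (attr_serial for the SPDK-kernel_target string is likewise assumed), with /dev/nvme1n3 as the namespace backing device selected by the GPT probing above:

    modprobe nvmet nvmet-tcp    # assumed; the trace only shows `modprobe nvmet`
    cfg=/sys/kernel/config/nvmet
    mkdir "$cfg/subsystems/kernel_target"
    mkdir "$cfg/subsystems/kernel_target/namespaces/1"
    mkdir "$cfg/ports/1"
    echo SPDK-kernel_target > "$cfg/subsystems/kernel_target/attr_serial"
    echo 1                  > "$cfg/subsystems/kernel_target/attr_allow_any_host"
    echo /dev/nvme1n3       > "$cfg/subsystems/kernel_target/namespaces/1/device_path"
    echo 1                  > "$cfg/subsystems/kernel_target/namespaces/1/enable"
    echo 10.0.0.1           > "$cfg/ports/1/addr_traddr"
    echo tcp                > "$cfg/ports/1/addr_trtype"
    echo 4420               > "$cfg/ports/1/addr_trsvcid"
    echo ipv4               > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/kernel_target" "$cfg/ports/1/subsystems/"

The nvme discover output above confirms the result: one discovery subsystem entry plus kernel_target, both on 10.0.0.1:4420.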
00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:34.650 06:59:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:37.938 Initializing NVMe Controllers 00:20:37.938 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:37.938 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:37.938 Initialization complete. Launching workers. 00:20:37.938 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30877, failed: 0 00:20:37.938 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30877, failed to submit 0 00:20:37.938 success 0, unsuccess 30877, failed 0 00:20:37.938 06:59:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:37.938 06:59:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:41.227 Initializing NVMe Controllers 00:20:41.227 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:41.227 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:41.227 Initialization complete. Launching workers. 00:20:41.227 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64897, failed: 0 00:20:41.227 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27259, failed to submit 37638 00:20:41.227 success 0, unsuccess 27259, failed 0 00:20:41.227 06:59:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:41.227 06:59:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:44.514 Initializing NVMe Controllers 00:20:44.514 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:44.514 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:44.514 Initialization complete. Launching workers. 
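[Annotation] Against the kernel target every abort comes back unsuccessful (success 0, unsuccess 30877 and 27259 above, and the qd=64 stats just below behave the same). That is most plausibly the expected shape rather than a failure: a command that has already completed by the time its abort arrives is counted as an "unsuccess", and the test grades itself on the submit/complete bookkeeping rather than on aborts landing. The teardown that follows in this log mirrors the configfs setup in reverse; as a sketch, reusing the $cfg shorthand from the earlier block:

    echo 0 > "$cfg/subsystems/kernel_target/namespaces/1/enable"
    rm -f  "$cfg/ports/1/subsystems/kernel_target"
    rmdir  "$cfg/subsystems/kernel_target/namespaces/1"
    rmdir  "$cfg/ports/1"
    rmdir  "$cfg/subsystems/kernel_target"
    modprobe -r nvmet_tcp nvmet    # matches the module unload traced below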
00:20:44.514 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75318, failed: 0 00:20:44.514 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18824, failed to submit 56494 00:20:44.514 success 0, unsuccess 18824, failed 0 00:20:44.514 06:59:48 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:44.514 06:59:48 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:44.514 06:59:48 -- nvmf/common.sh@677 -- # echo 0 00:20:44.514 06:59:48 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:44.514 06:59:48 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:44.514 06:59:48 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:44.514 06:59:48 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:44.514 06:59:48 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:44.514 06:59:48 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:44.514 ************************************ 00:20:44.514 END TEST kernel_target_abort 00:20:44.514 ************************************ 00:20:44.514 00:20:44.514 real 0m10.472s 00:20:44.514 user 0m5.517s 00:20:44.514 sys 0m2.399s 00:20:44.514 06:59:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:44.514 06:59:48 -- common/autotest_common.sh@10 -- # set +x 00:20:44.514 06:59:48 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:44.514 06:59:48 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:44.514 06:59:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:44.514 06:59:48 -- nvmf/common.sh@116 -- # sync 00:20:44.514 06:59:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:44.514 06:59:48 -- nvmf/common.sh@119 -- # set +e 00:20:44.514 06:59:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:44.514 06:59:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:44.514 rmmod nvme_tcp 00:20:44.514 rmmod nvme_fabrics 00:20:44.514 rmmod nvme_keyring 00:20:44.514 06:59:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:44.514 06:59:48 -- nvmf/common.sh@123 -- # set -e 00:20:44.514 06:59:48 -- nvmf/common.sh@124 -- # return 0 00:20:44.514 06:59:48 -- nvmf/common.sh@477 -- # '[' -n 87634 ']' 00:20:44.514 06:59:48 -- nvmf/common.sh@478 -- # killprocess 87634 00:20:44.514 Process with pid 87634 is not found 00:20:44.514 06:59:48 -- common/autotest_common.sh@936 -- # '[' -z 87634 ']' 00:20:44.514 06:59:48 -- common/autotest_common.sh@940 -- # kill -0 87634 00:20:44.514 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87634) - No such process 00:20:44.514 06:59:48 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87634 is not found' 00:20:44.514 06:59:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:44.514 06:59:48 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.082 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:45.082 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:45.082 06:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:45.082 06:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:45.082 06:59:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.082 06:59:49 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:45.082 06:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.082 06:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:45.082 06:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.082 06:59:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:45.082 00:20:45.082 real 0m24.348s 00:20:45.082 user 0m49.469s 00:20:45.082 sys 0m5.760s 00:20:45.082 06:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:45.082 ************************************ 00:20:45.082 END TEST nvmf_abort_qd_sizes 00:20:45.082 ************************************ 00:20:45.082 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 06:59:49 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:45.082 06:59:49 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:45.082 06:59:49 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:45.082 06:59:49 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:45.082 06:59:49 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:45.082 06:59:49 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:45.082 06:59:49 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:45.082 06:59:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.082 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 06:59:49 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:45.082 06:59:49 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:45.082 06:59:49 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:45.082 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:20:46.988 INFO: APP EXITING 00:20:46.988 INFO: killing all VMs 00:20:46.988 INFO: killing vhost app 00:20:46.988 INFO: EXIT DONE 00:20:47.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.556 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:47.556 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:48.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:48.125 Cleaning 00:20:48.125 Removing: /var/run/dpdk/spdk0/config 00:20:48.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:48.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:48.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:48.125 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:48.125 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:48.125 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:48.125 Removing: /var/run/dpdk/spdk1/config 00:20:48.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:48.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:48.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:48.125 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:48.125 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:48.125 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:48.125 Removing: /var/run/dpdk/spdk2/config 00:20:48.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:48.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:48.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:48.125 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:48.125 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:48.125 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:48.384 Removing: /var/run/dpdk/spdk3/config 00:20:48.384 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:48.384 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:48.384 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:48.384 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:48.384 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:48.384 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:48.384 Removing: /var/run/dpdk/spdk4/config 00:20:48.384 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:48.384 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:48.384 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:48.384 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:48.384 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:48.384 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:48.384 Removing: /dev/shm/nvmf_trace.0 00:20:48.384 Removing: /dev/shm/spdk_tgt_trace.pid65868 00:20:48.384 Removing: /var/run/dpdk/spdk0 00:20:48.384 Removing: /var/run/dpdk/spdk1 00:20:48.384 Removing: /var/run/dpdk/spdk2 00:20:48.384 Removing: /var/run/dpdk/spdk3 00:20:48.384 Removing: /var/run/dpdk/spdk4 00:20:48.384 Removing: /var/run/dpdk/spdk_pid65721 00:20:48.384 Removing: /var/run/dpdk/spdk_pid65868 00:20:48.384 Removing: /var/run/dpdk/spdk_pid66121 00:20:48.384 Removing: /var/run/dpdk/spdk_pid66317 00:20:48.384 Removing: /var/run/dpdk/spdk_pid66470 00:20:48.384 Removing: /var/run/dpdk/spdk_pid66536 00:20:48.384 Removing: /var/run/dpdk/spdk_pid66619 00:20:48.385 Removing: /var/run/dpdk/spdk_pid66717 00:20:48.385 Removing: /var/run/dpdk/spdk_pid66790 00:20:48.385 Removing: /var/run/dpdk/spdk_pid66834 00:20:48.385 Removing: /var/run/dpdk/spdk_pid66864 00:20:48.385 Removing: /var/run/dpdk/spdk_pid66927 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67019 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67446 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67492 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67538 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67554 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67615 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67631 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67693 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67709 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67749 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67767 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67807 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67825 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67962 00:20:48.385 Removing: /var/run/dpdk/spdk_pid67992 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68068 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68125 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68144 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68209 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68223 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68252 00:20:48.385 Removing: /var/run/dpdk/spdk_pid68266 
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68306
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68320
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68349
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68376
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68405
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68419
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68454
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68473
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68502
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68522
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68555
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68570
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68605
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68619
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68653
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68673
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68702
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68716
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68750
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68770
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68801
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68820
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68855
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68869
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68898
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68923
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68952
00:20:48.385 Removing: /var/run/dpdk/spdk_pid68966
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69005
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69023
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69055
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69078
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69115
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69129
00:20:48.385 Removing: /var/run/dpdk/spdk_pid69164
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69180
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69213
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69285
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69372
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69704
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69716
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69751
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69765
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69773
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69791
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69803
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69817
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69835
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69847
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69861
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69879
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69890
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69905
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69923
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69930
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69949
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69967
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69974
00:20:48.644 Removing: /var/run/dpdk/spdk_pid69993
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70017
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70035
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70057
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70127
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70148
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70163
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70186
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70196
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70203
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70238
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70255
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70276
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70289
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70291
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70299
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70306
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70314
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70321
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70323
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70350
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70376
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70386
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70414
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70418
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70426
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70466
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70478
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70504
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70506
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70514
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70521
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70529
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70531
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70538
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70546
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70627
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70663
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70769
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70801
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70845
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70854
00:20:48.644 Removing: /var/run/dpdk/spdk_pid70874
00:20:48.645 Removing: /var/run/dpdk/spdk_pid70893
00:20:48.645 Removing: /var/run/dpdk/spdk_pid70919
00:20:48.645 Removing: /var/run/dpdk/spdk_pid70939
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71004
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71018
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71061
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71136
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71186
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71209
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71306
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71347
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71378
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71602
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71694
00:20:48.645 Removing: /var/run/dpdk/spdk_pid71721
00:20:48.645 Removing: /var/run/dpdk/spdk_pid72053
00:20:48.645 Removing: /var/run/dpdk/spdk_pid72091
00:20:48.645 Removing: /var/run/dpdk/spdk_pid72399
00:20:48.645 Removing: /var/run/dpdk/spdk_pid72805
00:20:48.645 Removing: /var/run/dpdk/spdk_pid73074
00:20:48.645 Removing: /var/run/dpdk/spdk_pid73823
00:20:48.645 Removing: /var/run/dpdk/spdk_pid74639
00:20:48.645 Removing: /var/run/dpdk/spdk_pid74751
00:20:48.645 Removing: /var/run/dpdk/spdk_pid74819
00:20:48.645 Removing: /var/run/dpdk/spdk_pid76086
00:20:48.645 Removing: /var/run/dpdk/spdk_pid76302
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76621
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76731
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76866
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76894
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76921
00:20:48.910 Removing: /var/run/dpdk/spdk_pid76950
00:20:48.910 Removing: /var/run/dpdk/spdk_pid77027
00:20:48.910 Removing: /var/run/dpdk/spdk_pid77150
00:20:48.910 Removing: /var/run/dpdk/spdk_pid77292
00:20:48.910 Removing: /var/run/dpdk/spdk_pid77373
00:20:48.910 Removing: /var/run/dpdk/spdk_pid77762
00:20:48.910 Removing: /var/run/dpdk/spdk_pid78111
00:20:48.910 Removing: /var/run/dpdk/spdk_pid78113
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80321
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80323
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80595
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80613
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80634
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80659
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80664
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80755
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80757
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80865
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80872
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80980
00:20:48.910 Removing: /var/run/dpdk/spdk_pid80988
00:20:48.910 Removing: /var/run/dpdk/spdk_pid81395
00:20:48.910 Removing: /var/run/dpdk/spdk_pid81443
00:20:48.910 Removing: /var/run/dpdk/spdk_pid81553
00:20:48.910 Removing: /var/run/dpdk/spdk_pid81631
00:20:48.910 Removing: /var/run/dpdk/spdk_pid81944
00:20:48.910 Removing: /var/run/dpdk/spdk_pid82148
00:20:48.910 Removing: /var/run/dpdk/spdk_pid82513
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83044
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83479
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83539
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83587
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83641
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83738
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83798
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83846
00:20:48.911 Removing: /var/run/dpdk/spdk_pid83911
00:20:48.911 Removing: /var/run/dpdk/spdk_pid84222
00:20:48.911 Removing: /var/run/dpdk/spdk_pid85393
00:20:48.911 Removing: /var/run/dpdk/spdk_pid85526
00:20:48.911 Removing: /var/run/dpdk/spdk_pid85774
00:20:48.911 Removing: /var/run/dpdk/spdk_pid86329
00:20:48.911 Removing: /var/run/dpdk/spdk_pid86489
00:20:48.911 Removing: /var/run/dpdk/spdk_pid86650
00:20:48.911 Removing: /var/run/dpdk/spdk_pid86748
00:20:48.911 Removing: /var/run/dpdk/spdk_pid86911
00:20:48.911 Removing: /var/run/dpdk/spdk_pid87024
00:20:48.911 Removing: /var/run/dpdk/spdk_pid87686
00:20:48.911 Removing: /var/run/dpdk/spdk_pid87722
00:20:48.911 Removing: /var/run/dpdk/spdk_pid87757
00:20:48.911 Removing: /var/run/dpdk/spdk_pid88001
00:20:48.911 Removing: /var/run/dpdk/spdk_pid88036
00:20:48.911 Removing: /var/run/dpdk/spdk_pid88072
00:20:48.911 Clean
00:20:49.170 killing process with pid 60113
00:20:49.170 killing process with pid 60114
00:20:49.170 06:59:53 -- common/autotest_common.sh@1446 -- # return 0
00:20:49.170 06:59:53 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:20:49.170 06:59:53 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:49.170 06:59:53 -- common/autotest_common.sh@10 -- # set +x
00:20:49.170 06:59:53 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:20:49.170 06:59:53 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:49.170 06:59:53 -- common/autotest_common.sh@10 -- # set +x
00:20:49.170 06:59:53 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:49.170 06:59:53 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:49.170 06:59:53 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:49.170 06:59:53 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:20:49.170 06:59:53 -- spdk/autotest.sh@383 -- # hostname
00:20:49.170 06:59:53 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:49.429 geninfo: WARNING: invalid characters removed from testname!
00:21:11.362 07:00:15 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:14.648 07:00:19 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:17.183 07:00:21 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:19.717 07:00:23 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:22.285 07:00:26 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:24.819 07:00:28 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:26.723 07:00:31 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:26.723 07:00:31 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:21:26.723 07:00:31 -- common/autotest_common.sh@1690 -- $ lcov --version
00:21:26.723 07:00:31 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:21:26.982 07:00:31 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:21:26.982 07:00:31 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:21:26.982 07:00:31 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:21:26.982 07:00:31 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:21:26.982 07:00:31 -- scripts/common.sh@335 -- $ IFS=.-:
00:21:26.982 07:00:31 -- scripts/common.sh@335 -- $ read -ra ver1
00:21:26.982 07:00:31 -- scripts/common.sh@336 -- $ IFS=.-:
00:21:26.982 07:00:31 -- scripts/common.sh@336 -- $ read -ra ver2
00:21:26.982 07:00:31 -- scripts/common.sh@337 -- $ local 'op=<'
00:21:26.982 07:00:31 -- scripts/common.sh@339 -- $ ver1_l=2
00:21:26.982 07:00:31 -- scripts/common.sh@340 -- $ ver2_l=1
00:21:26.982 07:00:31 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:21:26.982 07:00:31 -- scripts/common.sh@343 -- $ case "$op" in
00:21:26.982 07:00:31 -- scripts/common.sh@344 -- $ : 1
00:21:26.982 07:00:31 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:21:26.982 07:00:31 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:26.982 07:00:31 -- scripts/common.sh@364 -- $ decimal 1
00:21:26.982 07:00:31 -- scripts/common.sh@352 -- $ local d=1
00:21:26.982 07:00:31 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:21:26.982 07:00:31 -- scripts/common.sh@354 -- $ echo 1
00:21:26.982 07:00:31 -- scripts/common.sh@364 -- $ ver1[v]=1
00:21:26.982 07:00:31 -- scripts/common.sh@365 -- $ decimal 2
00:21:26.982 07:00:31 -- scripts/common.sh@352 -- $ local d=2
00:21:26.982 07:00:31 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:21:26.982 07:00:31 -- scripts/common.sh@354 -- $ echo 2
00:21:26.982 07:00:31 -- scripts/common.sh@365 -- $ ver2[v]=2
00:21:26.982 07:00:31 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:21:26.982 07:00:31 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:21:26.982 07:00:31 -- scripts/common.sh@367 -- $ return 0
00:21:26.982 07:00:31 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:26.982 07:00:31 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:21:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:26.982 --rc genhtml_branch_coverage=1
00:21:26.982 --rc genhtml_function_coverage=1
00:21:26.982 --rc genhtml_legend=1
00:21:26.982 --rc geninfo_all_blocks=1
00:21:26.982 --rc geninfo_unexecuted_blocks=1
00:21:26.982
00:21:26.982 '
00:21:26.982 07:00:31 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:21:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:26.982 --rc genhtml_branch_coverage=1
00:21:26.982 --rc genhtml_function_coverage=1
00:21:26.982 --rc genhtml_legend=1
00:21:26.982 --rc geninfo_all_blocks=1
00:21:26.982 --rc geninfo_unexecuted_blocks=1
00:21:26.982
00:21:26.982 '
00:21:26.982 07:00:31 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:21:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:26.982 --rc genhtml_branch_coverage=1
00:21:26.982 --rc genhtml_function_coverage=1
00:21:26.982 --rc genhtml_legend=1
00:21:26.982 --rc geninfo_all_blocks=1
00:21:26.982 --rc geninfo_unexecuted_blocks=1
00:21:26.982
00:21:26.982 '
00:21:26.982 07:00:31 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:21:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:26.982 --rc genhtml_branch_coverage=1
00:21:26.982 --rc genhtml_function_coverage=1
00:21:26.982 --rc genhtml_legend=1
00:21:26.982 --rc geninfo_all_blocks=1
00:21:26.982 --rc geninfo_unexecuted_blocks=1
00:21:26.982
00:21:26.982 '
00:21:26.982 07:00:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:26.982 07:00:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:26.982 07:00:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:26.982 07:00:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:26.982 07:00:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:26.982 07:00:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:26.982 07:00:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:26.982 07:00:31 -- paths/export.sh@5 -- $ export PATH
00:21:26.982 07:00:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:26.982 07:00:31 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:26.982 07:00:31 -- common/autobuild_common.sh@440 -- $ date +%s
00:21:26.982 07:00:31 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734073231.XXXXXX
00:21:26.982 07:00:31 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734073231.cNvXtr
00:21:26.982 07:00:31 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:21:26.982 07:00:31 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:21:26.982 07:00:31 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:21:26.982 07:00:31 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:21:26.982 07:00:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:26.982 07:00:31 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:26.982 07:00:31 -- common/autobuild_common.sh@456 -- $ get_config_params
00:21:26.982 07:00:31 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:21:26.982 07:00:31 -- common/autotest_common.sh@10 -- $ set +x
00:21:26.982 07:00:31 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:21:26.982 07:00:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:26.982 07:00:31 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:21:26.982 07:00:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:26.982 07:00:31 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:21:26.982 07:00:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:26.982 07:00:31 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:26.982 07:00:31 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:26.982 07:00:31 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:26.982 07:00:31 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:26.982 07:00:31 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:26.982 + [[ -n 5978 ]]
00:21:26.982 + sudo kill 5978
00:21:26.992 [Pipeline] }
00:21:27.008 [Pipeline] // timeout
00:21:27.013 [Pipeline] }
00:21:27.028 [Pipeline] // stage
00:21:27.034 [Pipeline] }
00:21:27.048 [Pipeline] // catchError
00:21:27.058 [Pipeline] stage
00:21:27.060 [Pipeline] { (Stop VM)
00:21:27.072 [Pipeline] sh
00:21:27.352 + vagrant halt
00:21:30.641 ==> default: Halting domain...
00:21:37.242 [Pipeline] sh
00:21:37.523 + vagrant destroy -f
00:21:40.810 ==> default: Removing domain...
00:21:40.821 [Pipeline] sh
00:21:41.099 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:21:41.107 [Pipeline] }
00:21:41.121 [Pipeline] // stage
00:21:41.126 [Pipeline] }
00:21:41.139 [Pipeline] // dir
00:21:41.144 [Pipeline] }
00:21:41.157 [Pipeline] // wrap
00:21:41.163 [Pipeline] }
00:21:41.175 [Pipeline] // catchError
00:21:41.184 [Pipeline] stage
00:21:41.186 [Pipeline] { (Epilogue)
00:21:41.198 [Pipeline] sh
00:21:41.480 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:46.764 [Pipeline] catchError
00:21:46.766 [Pipeline] {
00:21:46.779 [Pipeline] sh
00:21:47.061 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:47.320 Artifacts sizes are good
00:21:47.329 [Pipeline] }
00:21:47.343 [Pipeline] // catchError
00:21:47.355 [Pipeline] archiveArtifacts
00:21:47.362 Archiving artifacts
00:21:47.479 [Pipeline] cleanWs
00:21:47.490 [WS-CLEANUP] Deleting project workspace...
00:21:47.490 [WS-CLEANUP] Deferred wipeout is used...
00:21:47.497 [WS-CLEANUP] done
00:21:47.499 [Pipeline] }
00:21:47.514 [Pipeline] // stage
00:21:47.519 [Pipeline] }
00:21:47.533 [Pipeline] // node
00:21:47.538 [Pipeline] End of Pipeline
00:21:47.593 Finished: SUCCESS